I am writing a query to return some production data. Basically, I need to insert either one or two rows into a table variable, based on whether the production part makes one or two items (the raw data does not allow for this; it comes from a lookup in my database).
I can retrieve all the source data I need easily, but when I come to insert it into the table variable I need to insert one record if it is a single part or two records if it is a twin part. I know I could use a cursor, but I'm sure there has to be an easier way!
Below is the code I have at the moment:
DECLARE @startdate AS datetime
DECLARE @enddate AS datetime
DECLARE @Line AS int
DECLARE @count int

SET @startdate = '2015-01-01'
SET @enddate = '2015-01-31'
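A cursor-free sketch of the pattern, assuming a hypothetical source table Production with a PartType column that distinguishes single from twin parts (adjust the names to the real lookup): join the source rows to a two-row tally and keep one or two copies per part.

DECLARE @Results TABLE (PartNumber varchar(20), ItemNo int)

-- One output row per single part, two per twin part; no cursor needed.
INSERT INTO @Results (PartNumber, ItemNo)
SELECT p.PartNumber, n.ItemNo
FROM Production AS p
JOIN (VALUES (1), (2)) AS n(ItemNo)
  ON n.ItemNo <= CASE WHEN p.PartType = 'Twin' THEN 2 ELSE 1 END
WHERE p.ProductionDate BETWEEN @startdate AND @enddate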
I've got a piece of code that returns 53 records when I run just the SELECT section. When I change it to INSERT INTO ... SELECT, it only inserts 39 records into the receiving table. There are no keys/constraints/indices or anything else on the receiving table (it's just a dumping ground for some data that will be processed later).
The code for creating the table is here:

USE [CDSExtractInpatients6.2]
GO
/****** Object: Table [dbo].[CDS_Inpatients_CDS_Feeds_Import] Script Date: 22/05/2015 15:54:15 ******/
SET ANSI_NULLS ON
GO
[code]...
I know most of the date fields are being created as varchar here, but this is something I inherited, and the SELECT outputs the dates as text. I don't know if it makes any difference, but the server is running SQL 2008.
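One hedged way to see which of the 53 rows never arrive is to subtract the landed rows from the source rows with EXCEPT; dbo.vSourceQuery is a hypothetical view wrapping the original SELECT, with the same column list as the target:

-- Rows returned here are the ones the INSERT is losing.
SELECT * FROM dbo.vSourceQuery
EXCEPT
SELECT * FROM dbo.CDS_Inpatients_CDS_Feeds_Import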
In a T-SQL 2012 script I have the following statement, which only works for a few records, since TST.dbo.LockCombination.seq mostly contains only the value 1. For every join listed below there should be 5 records, each with a distinct seq value of 1, 2, 3, 4, and 5. My goal is to add the missing rows to TST.dbo.LockCombination wherever there are no rows for seq values between 2 and 5, and then run the update statement below. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET LKC.combo = lockCombo2
FROM [LockerPopulation] A
JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
JOIN TST.dbo.Lock LK ON LKR.lockID = LK.lockID
JOIN TST.dbo.LockCombination LKC ON LK.lockID = LKC.lockID
WHERE LKC.seq = 2
A normal select statement looks like the following:
SELECT *
FROM TST.dbo.Locker LKR
JOIN TST.dbo.Lock LK ON LKR.lockID = LK.lockID
JOIN TST.dbo.LockCombination LKC ON LK.lockID = LKC.lockID
WHERE LKR.number IN (000, 001, 1237)
In case you need the DDL statements for the affected tables, here they are:
CREATE TABLE [dbo].[Locker](
    [lockerID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [schoolID] [int] NOT NULL,
    [number] [varchar](10) NOT NULL,
    [serialNumber] [varchar](20) NULL,
    [type] [varchar](3) NULL,
    [locationID] [int] NULL,
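A sketch of one way to generate the missing seq rows set-based, assuming LockCombination's remaining columns (such as combo) are nullable or defaulted so they can be filled by the update afterwards:

INSERT INTO TST.dbo.LockCombination (lockID, seq)
SELECT LK.lockID, n.seq
FROM TST.dbo.Lock AS LK
CROSS JOIN (VALUES (2), (3), (4), (5)) AS n(seq)   -- the seq values that may be missing
WHERE NOT EXISTS (SELECT 1
                  FROM TST.dbo.LockCombination AS LKC
                  WHERE LKC.lockID = LK.lockID
                    AND LKC.seq = n.seq)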
Running this code on my PC via VS 2005, .NET version 2.0.50727 on the server (shown in IIS). Code is in ASP.NET 2.0 and is a VB.NET console application; SSIS 2005.
Problem & Info:
I am bringing in an Excel file. I need to first strip out any non-detail rows, such as the breaks you see with totals and whatnot. In the end I should have only detail rows left before I start moving them into my SQL table. I'm not sure how to strip this information out in SSIS, specifically how to pick the right component and how to configure it, based on my Excel file here: http://www.webfound.net/excelfile.xls
Then, I assume I just use a Flat File Source component or something to take the columns in the Excel file and feed them through an OLE DB destination to shove each column into a corresponding column in my SQL Server table. I have used a Flat File Source in the past with a comma-delimited txt file, but never tried it with Excel.
Desired Help:
How to perform:
1) stripping out all undesired rows
2) importing each column into the SQL table
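If the filtering turns out to be easier in SQL than in SSIS components, a hedged alternative is to land the whole sheet in a staging table first (Excel Source feeding an OLE DB Destination) and delete the non-detail rows there. The staging table and the rule for spotting detail rows below are assumptions; adjust them to the real sheet:

-- Keep only detail rows: detail rows are assumed to carry a numeric
-- account code, while total/break rows are assumed to leave it blank.
DELETE FROM dbo.ExcelStaging
WHERE AccountCode IS NULL
   OR ISNUMERIC(AccountCode) = 0

INSERT INTO dbo.FinalTable (AccountCode, Amount)
SELECT AccountCode, Amount
FROM dbo.ExcelStaging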
Hi, good morning to all. My table: User_Group_Map (UserID UNIQUEIDENTIFIER, GroupID UNIQUEIDENTIFIER). Now, I want to write one stored procedure that can insert rows into the above table, but more than one row at once. That is, the program should allow multiple insertions without needing to call the stored procedure from the front end multiple times. Can anyone please help me with this? Thanks in advance... Ashok kumar.
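If the server is SQL Server 2008 or later, a table-valued parameter lets one procedure call carry any number of rows; a sketch (the type and procedure names are made up):

-- One call, many rows: the client fills a DataTable and passes it as @rows.
CREATE TYPE dbo.UserGroupList AS TABLE (
    UserID  uniqueidentifier NOT NULL,
    GroupID uniqueidentifier NOT NULL
)
GO
CREATE PROCEDURE dbo.InsertUserGroupMap
    @rows dbo.UserGroupList READONLY
AS
BEGIN
    INSERT INTO User_Group_Map (UserID, GroupID)
    SELECT UserID, GroupID
    FROM @rows
END
GO

On SQL Server 2005 and earlier, the usual fallbacks are passing a delimited string (or XML) and splitting it inside the procedure.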
I am finding it difficult to find an example that allows for insertion of additional rows into a table without dropping the table I'm inserting into, or inserting specific values. Like this example:
[URL] ....
I have 6 tables. I am formatting the data to conform to the final table as I insert it, but none of these examples gives me what I need. I am using SQL 2012.
SELECT CONVERT(VARCHAR(50), [FName]) + ' ' + CONVERT(VARCHAR(50), [LName]) AS [CustName]
    ,CAST('ALARMCOM' AS nvarchar(8)) AS VendorName
    ,CONVERT(VARCHAR(25), [CUSTOMER_CS_ACCOUNT_NUMBER]) AS [Cust_ID]
    ,CONVERT(VARCHAR(40), [Charge_Description]) AS [ChargeType]
    ,CASE
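Prefixing a SELECT like that with INSERT INTO adds rows to an existing table without dropping anything; a sketch, where dbo.FinalTable and dbo.SourceTable stand in for the real tables:

INSERT INTO dbo.FinalTable (CustName, VendorName, Cust_ID, ChargeType)
SELECT CONVERT(VARCHAR(50), [FName]) + ' ' + CONVERT(VARCHAR(50), [LName])
      ,CAST('ALARMCOM' AS nvarchar(8))
      ,CONVERT(VARCHAR(25), [CUSTOMER_CS_ACCOUNT_NUMBER])
      ,CONVERT(VARCHAR(40), [Charge_Description])
FROM dbo.SourceTable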
I have encountered some weird behaviour. Code that has been working for "eternities" suddenly started to fail, and I couldn't recreate the failure on any other machine.
I have written a sample script to illustrate the issue:
INSERT INTO #t_parcels (parcel_id, current_pos, end_pos)
SELECT 1, 100000000000.0, 900000000000.0
[Code] ..
The last insert crashed with "Arithmetic overflow error converting expression to data type int", even though there are no rows that satisfy the condition!
This is due to the "diff" column having the wrong datatype, BUT the insert had no hits in the database. So how can inserting 0 rows crash with an incorrect datatype?

I even copied the SELECT so it ran before the insert, and in that case the SELECT completed successfully.

When I changed the datatype in the table, the error went away, but I'm still curious what led to the error.
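One plausible explanation (an assumption, since the original plan is not shown): SQL Server does not guarantee that the WHERE clause filters rows before expressions in the select list are evaluated, so a narrowing conversion can overflow even when zero rows ultimately qualify. Guarding the conversion itself sidesteps the plan's evaluation order; the column names follow the sample script:

-- Hedged sketch: make the conversion safe regardless of when the engine
-- evaluates it, instead of relying on the WHERE clause running first.
SELECT parcel_id,
       CASE WHEN end_pos - current_pos <= 2147483647
            THEN CONVERT(int, end_pos - current_pos)
       END AS diff
FROM #t_parcels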
I am trying to BULK INSERT CSV files using a stored procedure in SQL Server 2008 R2 SP3. Although the files contain several thousand lines and BULK INSERT returns no errors, no data is actually imported into the table. Every field in the table is an NVARCHAR(50) datatype.
Here is the code for the operation (only the parameters for the insert itself):
set @open = 'bulk insert [DWHStaging].[dbo].[Abverkaufsquote] from '''
set @path = 'G:DataStagingDWHStagingSourceAbverkaufsquote'
set @params = ''' with (firstrow = 2
    , datafiletype = ''widechar''
    , fieldterminator = '';''
    , rowterminator = '' ''
    , codepage = ''1252''
    , keepnulls);'
The CSV file originates from a DB2 database. Using exactly the same code base, I can import several other types of CSV file without a problem.

The files are stored on the local server as UCS-2 Little Endian. One difference is that the files that do not import lack a BOM; the other difference is that the failing files are non-Unicode files.
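For what it's worth, datafiletype = ''widechar'' tells BULK INSERT to expect UTF-16 input, so a file that is actually single-byte ANSI (or is missing the UTF-16 byte-order mark) can fail to yield rows. A hedged variant to try for the non-Unicode files, reusing the original parameters (row terminator as in the original):

set @params = ''' with (firstrow = 2
    , datafiletype = ''char''      -- single-byte input instead of widechar
    , fieldterminator = '';''
    , codepage = ''1252''
    , keepnulls);'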
I am doing performance testing of the In-Memory OLTP option in SQL Server 2014. As part of this, I want to insert 500 million rows into a memory-optimized test table I have created.
I need a sample script to insert 500 million records into a table ....
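A sketch: generate rows with stacked CROSS JOINs and number them. dbo.InMemoryTest (id int, payload varchar(20)) is a hypothetical target; swap in the real memory-optimized table and columns. For 500 million rows, consider running this in smaller TOP (...) batches.

;WITH
L0 AS (SELECT 1 AS c FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),
                                  (1),(1),(1),(1),(1),(1),(1),(1)) AS v(c)), -- 16 rows
L1 AS (SELECT 1 AS c FROM L0 AS a CROSS JOIN L0 AS b),   -- 256 rows
L2 AS (SELECT 1 AS c FROM L1 AS a CROSS JOIN L1 AS b),   -- 65,536 rows
L3 AS (SELECT 1 AS c FROM L2 AS a CROSS JOIN L2 AS b),   -- ~4.3 billion rows
Nums AS (SELECT TOP (500000000)
                ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
         FROM L3)
INSERT INTO dbo.InMemoryTest (id, payload)
SELECT n, 'row ' + CAST(n AS varchar(12))
FROM Nums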
I created a trigger that inserts new rows into another table.
ALTER TRIGGER [dbo].[TI_Creation_Contact_dansSLX]
ON [dbo].[_IMPORT_FILES_CONTACTS]
AFTER INSERT
AS
[code]...
But if I run an INSERT with 50 rows, my CONTACT and ADDRESS tables get just one line each. I tried a cursor, and then I got 50 lines, each with a different AddressID and ContactID, but with the same Account and AccountID in my CONTACT table:
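Since the trigger body is elided above, this is only an assumption: the classic cause of one-row-only behavior is assigning columns of the inserted pseudo-table to scalar variables. A set-based sketch that handles all 50 rows at once (column names are placeholders):

ALTER TRIGGER [dbo].[TI_Creation_Contact_dansSLX]
ON [dbo].[_IMPORT_FILES_CONTACTS]
AFTER INSERT
AS
BEGIN
    -- inserted holds every new row, so select from it as a set;
    -- no cursor and no scalar variables.
    INSERT INTO CONTACT (AccountID, FirstName, LastName)
    SELECT i.AccountID, i.FirstName, i.LastName
    FROM inserted AS i
END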
I decided to change over from a Microsoft Access database file to the new SQL Server Compact Edition. Although reading data from the database is greatly improved, inserting new rows is extremely slow.

I was getting between 60 and 70 rows per second using OLEDB and an Access database, but now I'm only getting 14 to 27 rows per second using SQL Server CE.

I have tried the code changes below and nothing seems to increase the speed. Any help appreciated, as I would prefer to use SQL Server CE since the database is much smaller and I'm used to SQL commands.
Details: VB2008 Pro, .NET Framework 2.0, SQL Compact Edition V3.5, Encryption = Engine Default, Database Size = 128 MB (but needs to be changed to 999 MB).

Where Backup_On_Next_Run, OverWriteQuick, and CompressAns are Booleans; all other columns are nvarchar, size 10 to 30, except Full_Folder_Address, which is size 260.

14 to 20 rows per second (was 60 to 70 when using OLEDB Access).
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce( _
        ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
        ByVal File1 As String, ByVal File_Size_KB1 As String, _
        ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, _
        ByVal Last_Run1 As String, ByVal Result1 As String, _
        ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, _
        ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, _
        ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
        ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, _
        ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, _
        ByVal CompressAns As Boolean)
cmd.CommandText = "SELECT * FROM BackupDatabase"
cmd.ExecuteNonQuery()
Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)
Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()
rec.SetString(1, Group_Name1)
rec.SetString(2, Full_Folder_Address1)
rec.SetString(3, File1)
rec.SetSqlString(4, File_Size_KB1)
rec.SetSqlString(5, Schedule_To_Run1)
rec.SetSqlString(6, Backup_Time1)
rec.SetSqlString(7, Last_Run1)
rec.SetSqlString(8, Result1)
rec.SetSqlString(9, Last_Modfied1)
rec.SetSqlString(10, Latest_Modfied1)
rec.SetSqlBoolean(11, Backup_On_Next_Run1)
rec.SetSqlString(12, Total_Backup_Times1)
rec.SetSqlString(13, Server_File_Number1)
rec.SetSqlString(14, Server_Number1)
rec.SetSqlString(15, File_Break_Down1)
rec.SetSqlString(16, No_Of_Servers1)
rec.SetSqlString(17, Full_File_Address1)
rec.SetSqlBoolean(18, OverWriteQuick)
rec.SetSqlBoolean(19, CompressAns)
rs.Insert(rec)
Catch e As Exception
    MessageBox.Show(e.Message)
Finally
    conn.Close()
End Try
End Sub
20 to 24 rows per second.
TRY 3
Using SQL Commands Direct
Private Sub InsertRecordsIntoSQLServerce( _
        ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
        ByVal File1 As String, ByVal File_Size_KB1 As String, _
        ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, _
        ByVal Last_Run1 As String, ByVal Result1 As String, _
        ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, _
        ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, _
        ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
        ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, _
        ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, _
        ByVal CompressAns As Boolean)
I have created a trigger that fires every time a new item is added to TableA. The trigger then inserts 4 rows into TableB, which contains two columns (item, task type).
Each row will have the same item but a different task type, i.e.:
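A hedged sketch of such a trigger, with the task types and column names assumed; one set-based INSERT covers every row of a multi-row insert into TableA:

CREATE TRIGGER trg_TableA_AfterInsert ON TableA
AFTER INSERT
AS
BEGIN
    -- Four TableB rows per new TableA item: same item, different task type.
    INSERT INTO TableB (item, taskType)
    SELECT i.item, t.taskType
    FROM inserted AS i
    CROSS JOIN (VALUES ('Type1'), ('Type2'), ('Type3'), ('Type4')) AS t(taskType)
END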
I have a table with PO#, Days_to_travel, and Days_in_warehouse fields. I take the distinct Days_in_warehouse values in the table and insert them into a temp table. I want a script that will insert all of the values in the temp table's Days_in_warehouse field into the Days_in_warehouse_batch field of table 1 by PO#, duplicating the PO records until every PO has a record for each distinct value.
Example:
Temp table: (contains only one field, holding all the distinct values from table 1)
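A sketch of the duplication step, with table and column names approximated from the description (#DistinctDays is the temp table holding the distinct values):

-- Pair every PO with every distinct Days_in_warehouse value it does not
-- already have a record for.
INSERT INTO Table1 (PO, Days_in_warehouse_batch)
SELECT p.PO, d.Days_in_warehouse
FROM (SELECT DISTINCT PO FROM Table1) AS p
CROSS JOIN #DistinctDays AS d
WHERE NOT EXISTS (SELECT 1
                  FROM Table1 AS x
                  WHERE x.PO = p.PO
                    AND x.Days_in_warehouse_batch = d.Days_in_warehouse)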
From what I know, when you execute INSERT INTO table1 (field0) SELECT field0 FROM table2 ORDER BY field0, the rows are inserted in whatever order the server engine thinks best at that moment. Is there any way I can have field0 inserted into table1 in the order I need?
My problem is that I cannot touch table1: it is used by a legacy application which I am not allowed to modify. I was thinking about creating a clustered index, adding an id field, etc., but I cannot know how this would affect the front-end application.
table1 and table2 have only a few hundred records.
table2 =

CREATE TABLE [Test_FinalPlasma] (
    [nameID] [int] NOT NULL,
    [name] [varchar](2000) NULL
)

The update that is not giving the "good" order =
DELETE FROM FinalPlasma
INSERT INTO FinalPlasma SELECT name FROM dbo.Test_FinalPlasma
Unfortunately I cannot order the [name] field after the update, because it looks something like:
Donald McQ. Shaver
Mark Sheilds
R.J. Shirley
W.E. Sills
Kenneth A. Smee
A. Britton Smith
LCol Edward W. Smith
Harry V. Smith
M. E. Southern
Timothy A. Sparling
Spectrum Investment Management Limited
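For what it's worth: SQL Server tables have no inherent row order, and a SELECT without ORDER BY may return rows in any order, so "insertion order" cannot be preserved by the INSERT alone. The one documented lever is that an ORDER BY on INSERT ... SELECT controls the order in which IDENTITY values are assigned; table1 lacks such a column, so this sketch uses a hypothetical staging copy:

-- IDENTITY values are assigned in the ORDER BY order (documented behavior),
-- giving a column the reader can ORDER BY later.
CREATE TABLE #ordered (
    id   int IDENTITY(1,1),
    name varchar(2000)
)

INSERT INTO #ordered (name)
SELECT name
FROM dbo.Test_FinalPlasma
ORDER BY name

Whether the legacy application then shows the rows in that order still depends on how it reads them.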
Hey guys, I need to insert 10,000 rows into the database and I am wondering what the best way to do that is. Should I create a StringBuilder, build the 10,000 insert statements, and execute them on one connection? Or should I look at creating a DataSet and using an adapter to insert? Any ideas? Please answer only from experience and provide a sample in C#. Thanks.
I have a table with about 40 columns in Access 97 (terrible, I know, but it's not mine and I'm not allowed to normalize or change it). I have added a column, which is a number, called year. It will contain 1997, 1998, etc. There are about 1600 rows in the table. Is there anything like a correlated subquery that will allow me to insert this into the year column of each row?
Does anyone have experience of getting SQL Server to accept more than 100 inserts per second? We use a stored procedure to insert data, and this has increased throughput from 30/sec with ODBC to 100-110/sec, but we desperately need to write to the DB faster than this! There doesn't seem to be any I/O backlog, so I am assuming this "ceiling" is due to network overhead. (BCP can insert 3-4,000 rows quite happily.) Is there any way to batch up these inserts or process them differently in order to improve throughput?
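One hedged lever: every auto-committed insert pays its own round trip and log flush, so grouping calls into a single batch and transaction amortizes both (dbo.InsertReading is a stand-in for the real procedure):

-- Many inserts, one round trip, one commit.
BEGIN TRANSACTION
EXEC dbo.InsertReading @value = 101
EXEC dbo.InsertReading @value = 102
EXEC dbo.InsertReading @value = 103
-- ... more calls batched the same way ...
COMMIT TRANSACTION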
I am currently trying to insert or import some rows into a table, and SQL Server always seems to sort them by one of the columns, in a different order than I inserted the data. I would appreciate any feedback on this issue. Here is my table structure:
columnA  columnB  columnC  columnD  columnE  columnF   columnG
char     char     int      int      int      smallint  char
It keeps sorting by columnF and separates them into odds and evens. Does anyone have a clue why this is happening? I am just using these two inserts:
insert into tableNAME values('AA', 'A55', 0, 31, 1, 1, 3)
insert into tableNAME values('AA', 'A55', 0, 31, 1, 2, 2)
These two rows end up separated by other rows already in the table. If I add more rows, it lumps them by odds and evens.
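A note on the behavior (general SQL Server behavior, not specific to this table): rows have no guaranteed storage or retrieval order, and what looks like sorting is usually an index driving the scan. The reliable fix is to request an order when reading:

-- Without ORDER BY, row order is an accident of the access path.
SELECT columnA, columnB, columnC, columnD, columnE, columnF, columnG
FROM tableNAME
ORDER BY columnA, columnB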
Hi, I need to insert rows into table1 from table2 and table3, but I don't want to insert repeated combinations of col2, col3. So table1 has the primary key (col2, col3).
This is table1:
create table table1(
    col1 int not null,
    col2 int not null,
    col3 int not null,
    constraint PK_table1 primary key (col2, col3)
)
This is my "insert" code:
INSERT INTO table1
SELECT table2.col1, table2.col2, table3.col3
FROM table2, table3
WHERE table2.col1 = table3.col1
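A hedged variant that collapses the join to one row per (col2, col3) pair and skips pairs already in table1, so PK_table1 cannot be violated:

INSERT INTO table1 (col1, col2, col3)
SELECT MIN(t2.col1), t2.col2, t3.col3      -- one row per (col2, col3) pair
FROM table2 AS t2
JOIN table3 AS t3 ON t2.col1 = t3.col1
WHERE NOT EXISTS (SELECT 1
                  FROM table1 AS t1
                  WHERE t1.col2 = t2.col2
                    AND t1.col3 = t3.col3)
GROUP BY t2.col2, t3.col3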
Hi! I have a database in SQL Server 2005 and I want to transfer the data from 2005 to a SQL Server CE database. I've created a number of tables in CE, but how do I write SQL to transfer the data?
I have several rows I need to update or insert. Right now the code deletes all the "possible" rows and then inserts. I don't like this way, because the index entries are rebuilt each time it does a DELETE/INSERT pass.
Is there any "good practice" when inserting/updating rows? Any better way to do it?
Doesn't work because of a parsing error. Apparently the semicolon isn't understood in CommandText, although if the same CommandText is executed in SQL Management Studio it runs flawlessly.
2)
SqlCeCommand command = Connection.CreateCommand();
command.CommandText = "INSERT INTO [Table] (col1, col2) SELECT val1, val2 UNION ALL SELECT val11, val12";
if (Connection.State != System.Data.ConnectionState.Closed)
{
    Connection.Close();
}
Connection.Open();
command.ExecuteNonQuery();
Connection.Close();
Using this method I found what I think is a bug.

I need to insert around 10,000 rows of data and wouldn't want to run the Connection.Open(); Command.Execute(); Connection.Close(); cycle 10,000 times.
I keep getting this error, and it only inserts the first row into my database table: "The variable name '@CustId' has already been declared. Variable names must be unique within a query batch or stored procedure."
Protected Sub Button2_Click(ByVal sender As Object, ByVal e As System.EventArgs)
Dim drow As GridViewRow
For Each drow In GridView1.Rows
Dim textBoxText As String = CType(drow.FindControl("Label2"), Label).Text
I have recently started an ASP.NET application and am having some issues updating, inserting and deleting rows. When I started working with it, I was getting errors because it could not find any update command. Eventually I figured out how to generate the commands automatically, by configuring my SqlDataSource control and clicking the "Advanced" button. Right now, though, I have generated the commands but I still cannot insert, update or delete rows.

When I attempt to update anything, I receive an error that says "The data types text and nvarchar are incompatible in the equal to operator." Nowhere in my table do I have any columns that use the datatype nvarchar, only text and int. I tried switching all of my text columns to nvarchar(500), which did not help. I am led to believe that the auto-generated SQL procedures are trying to do something behind the scenes that is making my database act up, because I get the same exception even when I delete rows, so the datatypes cannot be mismatched there; all the datasource is doing is deleting rows, so there should be no need to worry about data types.

I only get the error when I check the "Use optimistic concurrency" box. When I do not use optimistic concurrency, I can delete, insert and update rows... but nothing happens. There are no errors, but nothing is deleted, updated or inserted either; upon postback, nothing has changed. I may upload a copy of the exact exception page if someone thinks that may help.

Here is the update command that was generated:

UPDATE [Record Information]
SET [Speed] = @Speed,
    [Recording Company] = @Recording_Company,
    [Year] = @Year,
    [Artist] = @Artist,
    [Side 1 Track Title] = @Side_1_Track_Title,
    [Side 1 Track Duration] = @Side_1_Track_Duration,
    [Side 2 Track Title] = @Side_2_Track_Title,
    [Side 2 Track Duration] = @Side_2_Track_Duration,
    [Sleeve Description] = @Sleeve_Description
WHERE [Record Database ID] = @original_Record_Database_ID

Apparently no stored procedures exist for any of these operations, and I am unsure why. The "Record Database ID" is my identity column and is the only field that is (and is supposed to be) uneditable.
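For context on the error itself, a known T-SQL restriction rather than anything specific to this application: with optimistic concurrency, the generated commands compare every column's original value in the WHERE clause, and the text datatype does not support the = operator, so any such comparison fails:

-- Minimal illustration of the restriction behind the error message.
CREATE TABLE #demo (notes text)
INSERT INTO #demo VALUES ('abc')

SELECT * FROM #demo WHERE notes = 'abc'    -- fails: text is incompatible with =
SELECT * FROM #demo WHERE notes LIKE 'abc' -- LIKE does accept text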
Hello, I have 4 tables: Customer, Customer_personal_info, Customer_Financial_info, and Customer_Other_info. The Customer table has a primary key, CustomerID, related to every other table by a foreign key. I want to insert data into the four tables using one form with tabs. I created a class and stored procedures to insert a row into each table. How do I execute all four classes inside one BeginTrans ... CommitTrans/Rollback ... EndTrans? Thanking you,
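One hedged option (assuming SQL Server 2005 or later, for TRY/CATCH) is to let a single wrapper procedure own the transaction, so the front end makes one call; the procedure names below are stand-ins for the four insert procedures:

CREATE PROCEDURE dbo.InsertCustomerFull
    @CustomerID uniqueidentifier  -- plus the remaining parameters
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        EXEC dbo.InsertCustomer              @CustomerID  -- ...
        EXEC dbo.InsertCustomerPersonalInfo  @CustomerID  -- ...
        EXEC dbo.InsertCustomerFinancialInfo @CustomerID  -- ...
        EXEC dbo.InsertCustomerOtherInfo     @CustomerID  -- ...
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
        RAISERROR('InsertCustomerFull failed', 16, 1)
    END CATCH
END

Alternatively, the four calls can share one ADO transaction opened from the front end (BeginTrans/CommitTrans/Rollback), which matches the pattern named in the question.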
Hi guys, my little bulk insert is only bringing in every second row of a CSV file. This is not good, as I need every row. My SQL command is thus:

InsertCommand="BULK INSERT TBL_Unitel_services FROM 'C:/webroot/servicedesk/csvs_Services/csv.csv' WITH (FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR = '', MAXERRORS = 0)"
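A hedged guess, since the original row terminator appears to have been lost in transcription: every-second-row behavior often points at a ROWTERMINATOR that swallows an extra line per row. For a Windows-style CRLF file, the statement would look like:

BULK INSERT TBL_Unitel_services
FROM 'C:/webroot/servicedesk/csvs_Services/csv.csv'
WITH (FIRSTROW = 1,
      FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\r\n',   -- match the file's actual line endings
      MAXERRORS = 0)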
I have an "insert into" statement that creates two identical rows in a table, with this statement: delete from [table] where [column] = @parameterINSERT INTO [table]([fields]) VALUES ([parameter values]) This is the code-behind that performs the insert: Dim dbConn As New SqlConnection(strConn)Dim cmd As New SqlCommand("sp_CreateUser", dbConn)cmd.CommandType = Data.CommandType.StoredProcedurecmd.Parameters.AddWithValue("@UserID", strUserID)cmd.Parameters.AddWithValue("@UserName", strUserName)cmd.Parameters.AddWithValue("@Email", strEmail)cmd.Parameters.AddWithValue("@FirstName", strFirstName)cmd.Parameters.AddWithValue("@LastName", strLastName)cmd.Parameters.AddWithValue("@Teacher", strTeacher)cmd.Parameters.AddWithValue("@GradYr", lngGradYr)Using dbConndbConn.Open()cmd.ExecuteNonQuery()dbConn.Close()cmd.Dispose()dbConn.Dispose()End Using I wonder if it inserts twice due to a postback issue. Is there a way to stop two rows from being created in the first place with the same "insert into" statement? I'd appreciate any advice.
Hi, I'm using SQL Express 2005 and SQL Server Management Studio Express. I want to know how I can insert multiple records into a table with a single query, and also what's wrong with this insert query?

DROP TABLE IF EXISTS `tblcountry`;
CREATE TABLE `tblcountry` (
  `ID` int(3) NOT NULL auto_increment,
  `LCID` int(4) unsigned default '0',
  `CountryCode` char(2) default NULL,
  `Country` varchar(50) default NULL,
  `CountryInt` varchar(50) default NULL,
  `Language` varchar(50) default NULL,
  `Standard` tinyint(1) unsigned default '0',
  `Active` tinyint(1) unsigned default '0',
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM AUTO_INCREMENT=4 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;

INSERT INTO `tblcountry` (`ID`,`LCID`,`CountryCode`,`Country`,`CountryInt`,`Language`,`Standard`,`Active`)
VALUES (1,1030,'DK','Danmark','Denmark','Dansk',0,1),
       (2,2057,'GB','England','England','English',1,1),
       (3,1031,'DE','Deutschland','Germany','Deutsch',0,1);

Finally, how can I extract the database structure and data from SQL Server Management Studio Express, so that I can copy it and reuse it on another computer? Thanks. Jack.
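For what it's worth, the statements above are MySQL dialect (backticks, auto_increment, ENGINE=MyISAM), which SQL Server 2005 will not parse; the multi-row VALUES list is also MySQL (and SQL Server 2008+) syntax. A hedged T-SQL 2005 rendering:

CREATE TABLE tblcountry (
    ID          int IDENTITY(1,1) PRIMARY KEY,
    LCID        int DEFAULT 0,
    CountryCode char(2) NULL,
    Country     varchar(50) NULL,
    CountryInt  varchar(50) NULL,
    [Language]  varchar(50) NULL,
    [Standard]  tinyint DEFAULT 0,
    Active      tinyint DEFAULT 0
)

-- Explicit ID values require IDENTITY_INSERT; UNION ALL stands in for the
-- multi-row VALUES list that SQL Server 2005 lacks.
SET IDENTITY_INSERT tblcountry ON
INSERT INTO tblcountry (ID, LCID, CountryCode, Country, CountryInt, [Language], [Standard], Active)
SELECT 1, 1030, 'DK', 'Danmark', 'Denmark', 'Dansk', 0, 1 UNION ALL
SELECT 2, 2057, 'GB', 'England', 'England', 'English', 1, 1 UNION ALL
SELECT 3, 1031, 'DE', 'Deutschland', 'Germany', 'Deutsch', 0, 1
SET IDENTITY_INSERT tblcountry OFF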
I am trying to insert multiple rows into a table, getting the data from a web form.
I have 2 fields being passed from a web form to the stored procedure below:

@FormBidlistID, sample data --> 500
@CheckBoxListContractors, sample data --> 124,125,145,154,156,
The DELETE below is working great. However, for some reason the INSERT INTO command is not separating the string and adding rows to the database. I am not sure where the error is occurring.
In the end I need the data to go into the database columns like so
CREATE PROCEDURE dbo.UpdateBidlistContractors
    @FormBidlistID int,
    @CheckBoxListContractors varchar(3999)
AS
DELETE FROM BidlistContractors
WHERE BidlistID = @FormBidlistID

DECLARE @ContractorID nvarchar(10)
DECLARE @BidlistID nvarchar(10)
DECLARE @startPosition int
DECLARE @commaPosition int
SET @startPosition = 1
SET @commaPosition = 0

WHILE (@startPosition < LEN(@ContractorID))
BEGIN
    SET @commaPosition = CHARINDEX(' , ', @ContractorID, @startPosition)
    SET @ContractorID = SUBSTRING(@ContractorID, @startPosition, @commaPosition - @startPosition)
    INSERT INTO BidlistContractors (BidlistID, ContractorID)
    VALUES (@BidlistID, @ContractorID)
    SET @startPosition = @startPosition + LEN(@ContractorID) + 1
END
GO
Where am I going wrong? Thank you in advance for any help. :-)
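A hedged reading of the loop above: @ContractorID and @BidlistID are never initialized from the parameters, so LEN(@ContractorID) is NULL and the WHILE body never runs. A corrected sketch that walks @CheckBoxListContractors instead (assuming a plain comma delimiter and the trailing comma shown in the sample data):

DECLARE @ContractorID varchar(10)
DECLARE @startPosition int
DECLARE @commaPosition int
SET @startPosition = 1

WHILE @startPosition <= LEN(@CheckBoxListContractors)
BEGIN
    SET @commaPosition = CHARINDEX(',', @CheckBoxListContractors, @startPosition)
    IF @commaPosition = 0 BREAK
    SET @ContractorID = SUBSTRING(@CheckBoxListContractors, @startPosition,
                                  @commaPosition - @startPosition)
    INSERT INTO BidlistContractors (BidlistID, ContractorID)
    VALUES (@FormBidlistID, @ContractorID)
    SET @startPosition = @commaPosition + 1
END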
I am stuck. I have some vars being passed to an .aspx page that I need to dump into a db table as multiple rows; how do I do it? I am going into a SQL database using VS.NET 2003. Here's the format in which the vars come into the page:

UserID = 46
k12SessionArray0interaction_id = Interaction_01
k12SessionArray0correct_response = d
k12SessionArray0student_response = d
k12SessionArray0result = C
k12SessionArray0latency = 00:00:02
k12SessionArray1interaction_id = Interaction_02
k12SessionArray1correct_response = c
k12SessionArray1student_response = c
k12SessionArray1result = C
k12SessionArray1latency = 00:00:02
k12SessionArray2interaction_id = Interaction_03
k12SessionArray2correct_response = a
k12SessionArray2student_response = a
k12SessionArray2result = C
k12SessionArray2latency = 00:00:03

Now here's the format of the database:

AnswerID | UserID | InteractionID | CorrectResponse | StudentResponse | QResult | Latency

How do I insert the data to have it be multiple rows, like:

1 | 46 | Interaction_01 | d | d | C | 00:00:02
2 | 46 | Interaction_02 | c | c | C | 00:00:02
3 | 46 | Interaction_03 | a | a | C | 00:00:03
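Assuming AnswerID is an identity column, the three rows above reduce to one INSERT; a sketch with a hypothetical table name (on SQL Server 2000/2005, UNION ALL stands in for a multi-row VALUES list):

-- One statement, three rows; AnswerID is assumed to be assigned by IDENTITY.
INSERT INTO dbo.Answers (UserID, InteractionID, CorrectResponse, StudentResponse, QResult, Latency)
SELECT 46, 'Interaction_01', 'd', 'd', 'C', '00:00:02' UNION ALL
SELECT 46, 'Interaction_02', 'c', 'c', 'C', '00:00:02' UNION ALL
SELECT 46, 'Interaction_03', 'a', 'a', 'C', '00:00:03'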