Fastest Way To Insert Range/sequence
Jul 20, 2005
I'd like to use a stored procedure to insert large amounts of records
into a table. My field A should be filled with a given range of
numbers. I do the following ... but I'm sure there is a better
(faster) way:
select @start = max(A) from tbl where B = 'test1' and C = 'test2'
while @start <= 500000
begin
insert into tbl (A, B, C)
values (@start, 'test1', 'test2')
set @start = @start +1
end
Another question: how can I prevent another user from inserting the same numbers into field A?
Thanks a lot for any help!
ratu
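A set-based alternative is usually much faster than a row-by-row loop. A minimal sketch, assuming SQL Server 2005 or later for the recursive CTE (on SQL 2000 a permanent numbers table can play the same role); it starts at MAX(A) + 1 so the existing maximum is not re-inserted:
-- Sketch: generate the numbers up to 500000 in one INSERT instead of looping.
DECLARE @start int;
SELECT @start = MAX(A) + 1 FROM tbl WHERE B = 'test1' AND C = 'test2';
WITH Numbers (n) AS
(
    SELECT @start
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 500000
)
INSERT INTO tbl (A, B, C)
SELECT n, 'test1', 'test2'
FROM Numbers
OPTION (MAXRECURSION 0);   -- allow more than the default 100 recursion levels
For the second question, a UNIQUE constraint (or unique index) on A, combined with doing the MAX lookup and the insert in one transaction under a suitable isolation level, is the usual way to stop two users inserting the same numbers.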
View 5 Replies
Mar 10, 2008
What is the fastest way for a stored procedure to copy a table from a linked server?
I would like to tune this statement, possibly with hints or other logging options. Assume that table_A and table_B have exactly the same table structure and that I want to preserve table_A and all its indexes and constraints. The table will be truncated before this load, if that helps in any way.
insert into table_A select * from OpenQuery(Server,'select * from Table_B')
TIA, Mike
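One thing worth trying (a sketch, not a definitive answer): since the table is truncated first, a TABLOCK hint on the insert can enable minimal logging, assuming SQL Server 2008 or later and the simple or bulk-logged recovery model; on 2005 the hint still cuts locking overhead.
TRUNCATE TABLE table_A;
INSERT INTO table_A WITH (TABLOCK)   -- table-level lock allows minimal logging under simple/bulk-logged recovery
SELECT *
FROM OPENQUERY(Server, 'SELECT * FROM Table_B');
If table_A has nonclustered indexes, disabling them before the load and rebuilding them afterwards is another common speed-up.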
View 4 Replies
View Related
Nov 14, 2007
I'm writing a program that allows users to upload a csv file. This file is then separated into 4 datatables based on certain criteria, and each datatable is uploaded into my database. I'm essentially adding new rows to the datatables and then running an update command on each using a TableAdapter. The problem is that these csv files can be large and can end up with 4000+ new records being added to the database, and the update commands take a while to do it. I've sat for about five minutes on one run while it updated. I put in some timing variables to see where all the time is spent: it takes only seconds to parse the data and separate it into the datatables, but minutes on the update commands. Is there a more efficient way to insert this much data?
View 4 Replies
View Related
Feb 22, 2007
I tried to insert a large pool of data from table A into table B. Table B is then exported to Excel for viewing purposes. However, I found that some of the rows are not inserted in order; they seem to be inserted in between other rows that were inserted before them. May I know what the problem is? There is no key assigned to table B. Do we need to disable a key in order to have items inserted in sequence order?
In other words, instead of "insert", can we "append" records?
View 9 Replies
View Related
Mar 25, 2002
What I want to do is: insert into newtable (field1, field2, sequence) select
(fielda, fieldb, #) from oldtable and have the sequence field "re-seed" to 1 every time the value in fielda/field1 changes. This added requirement makes this not an identity field problem, if I understand identity fields. This is a data conversion problem, so I'm converting 1000's of rows from oldtable to newtable and want the sequence to re-seed at every change in value of the fielda/field1 data; in fact fielda/field1 is a simplification as there could be multiple controlling fields forcing a re-seed.
TIA...Al
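If an upgrade path to SQL Server 2005 or later is available, ROW_NUMBER() does exactly this kind of per-group re-seeding; a sketch against the column names in the post (fielda as the controlling field, fieldb as the payload):
INSERT INTO newtable (field1, field2, sequence)
SELECT fielda,
       fieldb,
       ROW_NUMBER() OVER (PARTITION BY fielda ORDER BY fieldb) AS sequence   -- restarts at 1 for each fielda value
FROM oldtable;
Additional controlling fields simply go into the PARTITION BY list.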
View 2 Replies
View Related
Jun 11, 2008
SOURCE TABLE
ID________COMMENT
123_______I am joe
123_______I am programmer
124_______I am Wang
124_______I am programmer
124_______I like cricket
DESTINATION TABLE
ID_____SEQ______COMMENT
123_____1_______I am joe
123_____2_______I am programmer
124_____1_______I am wang
124_____2_______I am programmer
124_____3_______I like cricket
Can somebody please advise the easiest way to do this in SQL 2000?
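On SQL 2000 (no ROW_NUMBER()), one common approach is a correlated subquery that counts how many comments for the same ID sort at or before the current one. A sketch, assuming the tables are literally named Source and Destination and that COMMENT values are unique within an ID, as in the sample data:
INSERT INTO Destination (ID, SEQ, COMMENT)
SELECT s.ID,
       (SELECT COUNT(*)
        FROM Source s2
        WHERE s2.ID = s.ID
          AND s2.COMMENT <= s.COMMENT) AS SEQ,   -- position of this comment within its ID
       s.COMMENT
FROM Source s;
If comments can repeat within an ID, loading into a work table with an IDENTITY column and deriving SEQ from that is the safer route.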
View 3 Replies
View Related
Jul 23, 2005
How do I get the next int value for a column before I do an insert in my SQL Server 2000? I'm currently using an Oracle sequence and doing something like:
select seq.nextval from dual;
Then I do my insert into 3 different tables, all using the same unique ID. I can't use the @@identity function because my application uses a connection pool and it's not guaranteed that a connection won't be used by another request, so under a lot of load there could be major problems, and this doesn't work:
insert into <table>;
select @@identity;
This doesn't work because the select @@identity might give me the value of an insert from someone else's request.
Thanks,
Brent
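Two common patterns here, shown as sketches with hypothetical table names. SCOPE_IDENTITY() avoids the @@IDENTITY cross-request problem because it only reports identity values generated in the current scope; alternatively, a key table updated atomically mimics an Oracle sequence:
-- Option 1: identity column plus SCOPE_IDENTITY(), safe within the inserting scope
INSERT INTO Parent (Name) VALUES ('x');
DECLARE @id int;
SET @id = SCOPE_IDENTITY();
INSERT INTO Child1 (ParentID) VALUES (@id);
INSERT INTO Child2 (ParentID) VALUES (@id);

-- Option 2: a sequence-like key table (hypothetical table NextKey(KeyName, NextVal))
DECLARE @next int;
UPDATE NextKey
SET @next = NextVal = NextVal + 1      -- read and increment in a single atomic statement
WHERE KeyName = 'MyKey';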
View 4 Replies
View Related
Feb 22, 2005
Hi, can anyone teach me how to automatically create an invoice number and insert or update it in a column?
View 2 Replies
View Related
Jul 29, 2015
The T-SQL 2012 update script listed below only works for a few records, since the value of TST.dbo.LockCombination.seq only contains the value 1 in most cases. Basically, for every join listed below there should be 5 records, each with a distinct seq value of 1, 2, 3, 4, and 5. Thus my goal is to work out how to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values between 2 and 5. I would like to know how to insert the missing rows and then run the following update statement. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET LKC.combo = lockCombo2
FROM [LockerPopulation] A
JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
[Code] ....
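To create the missing seq rows before running the update, one approach (a sketch, guessing at the LockCombination columns since the full definition isn't shown) is to cross join the existing lockers against the fixed seq values 1-5 and insert whichever combinations don't already exist:
INSERT INTO TST.dbo.LockCombination (lockerID, seq)   -- column list is assumed; adjust to the real table
SELECT LKR.lockerID, n.seq
FROM TST.dbo.Locker LKR
CROSS JOIN (SELECT 1 AS seq UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5) n
WHERE NOT EXISTS (SELECT 1
                  FROM TST.dbo.LockCombination LKC
                  WHERE LKC.lockerID = LKR.lockerID
                    AND LKC.seq = n.seq);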
View 10 Replies
View Related
May 16, 2008
Hi I was wondering if anyone else was running into the problem where SSIS inserts data into a range incorrectly.
I have a Raw File Source in a Data Flow. The data is manipulated and then defined to insert into a named range within an Excel Worksheet. I have a task prior to the data flow that takes a template and copies it to another location. Within the Template I have defined several named ranges.
The Data Flow inserts the data in the line below the named range. For instance if I have a named range defined as B1:C1
the insert occurs on B2:C2
Does anyone know why this is happening or how I can get it to insert into the Range I have specified?
Thanks in advance!
View 3 Replies
View Related
Jun 20, 2006
Hi,
I used the following data flow in my package (using Oracle 8i):
Flat File Source -------------> Data Conversion ---------------> OLE DB Destination (Oracle data table)
I use the above data flow to map the source file to the destination. When I run the SSIS package I get the following error:
"Truncation may occur due to inserting data from data flow column "columnName" with a length of 50"
I understood this error relates to the data length, so I changed the source column length to exactly match the destination table.
I am still getting this error. Can anyone give me a solution? Should I change the data type as well?
Please give your suggestions.
Thanks & Regards
Jeyakumar.M
chennai
View 3 Replies
View Related
Sep 16, 2013
I am trying to pull MTD and YTD sales for this year and last year.
I have this line to pull all invoices between two dates:
WHERE (invc_hdr.ivh_dt Between '08/01/2013' And '08/31/13')
But I want to be able to put the dates in without having to change the code each time. I want a window to pop up so I can input the date range. Is there a way to do this?
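SQL itself can't pop up an input window (that part belongs to the reporting or front-end tool), but the query can take the dates as parameters so the code never changes. A minimal sketch, assuming the rest of the select list stays as it is today:
DECLARE @StartDate datetime, @EndDate datetime;
SET @StartDate = '2013-08-01';
SET @EndDate   = '2013-08-31';

SELECT invc_hdr.*
FROM invc_hdr
WHERE invc_hdr.ivh_dt BETWEEN @StartDate AND @EndDate;
In Reporting Services or a front-end application these become report/query parameters, which is what gives you the prompt window.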
View 5 Replies
View Related
Feb 7, 2007
I need to import some data into a table that is in a merge bi-directional publication. The table has an identity column that is being "auto managed". The ID numbers I need to import are not within any of the assigned "ranges". They are less than any of the current ranges.
Turning IDENTITY_INSERT on does not work because of the check constraint error. The merge replication triggers validate the ID range as well.
Edit: using SQL 2005 standard publisher and subscriber. Publication is 2005.
View 7 Replies
View Related
Sep 14, 2007
hi
I need to insert a list of IP addresses using a stored procedure.
The user will give a from and to range of IP addresses; I have to insert all the possible IP addresses between those values.
how to do this..
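A sketch of one way to do it, with hypothetical names (procedure InsertIPRange, target table IPList(ipaddress varchar(15))): convert the from/to addresses to a single integer, step across the range, and convert each number back to dotted form.
CREATE PROCEDURE InsertIPRange
    @fromIP varchar(15),
    @toIP   varchar(15)
AS
BEGIN
    DECLARE @from bigint, @to bigint, @n bigint;

    -- dotted quad -> single number: a*16777216 + b*65536 + c*256 + d
    SELECT @from = CAST(PARSENAME(@fromIP, 4) AS bigint) * 16777216
                 + CAST(PARSENAME(@fromIP, 3) AS bigint) * 65536
                 + CAST(PARSENAME(@fromIP, 2) AS bigint) * 256
                 + CAST(PARSENAME(@fromIP, 1) AS bigint),
           @to   = CAST(PARSENAME(@toIP, 4) AS bigint) * 16777216
                 + CAST(PARSENAME(@toIP, 3) AS bigint) * 65536
                 + CAST(PARSENAME(@toIP, 2) AS bigint) * 256
                 + CAST(PARSENAME(@toIP, 1) AS bigint);

    SET @n = @from;
    WHILE @n <= @to
    BEGIN
        -- number -> dotted quad
        INSERT INTO IPList (ipaddress)
        VALUES (CAST(@n / 16777216 % 256 AS varchar(3)) + '.'
              + CAST(@n / 65536 % 256    AS varchar(3)) + '.'
              + CAST(@n / 256 % 256      AS varchar(3)) + '.'
              + CAST(@n % 256            AS varchar(3)));
        SET @n = @n + 1;
    END
END
A numbers table could replace the WHILE loop for a fully set-based insert, but the loop keeps the sketch short.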
View 3 Replies
View Related
Jul 11, 2012
I want to automatically insert records for a range that the user has defined.
For example user define :
from to city
10000 12000 tehran
15000 19000 babol
I should insert 2000 records for tehran (10000, 10001, ..., 12000) and 4000 records for babol (15000, 15001, ..., 19000).
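A set-based sketch, assuming hypothetical names: a range table Ranges(fromNo, toNo, city), a target table CityNumbers(number, city), and a helper Numbers table containing the integers 0 upward (any tally table works):
INSERT INTO CityNumbers (number, city)
SELECT r.fromNo + n.number, r.city
FROM Ranges r
JOIN Numbers n
  ON n.number <= r.toNo - r.fromNo;   -- one row per value in the range, inclusive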
View 3 Replies
View Related
Jun 25, 2014
Given a Table1 with two columns 'Name' with some N rows of data and another Table2 with one column 'SeqNo' with N rows, each of which contains a unique integer which can be ordered monotonically, I want to do an INSERT into some Table3 with two columns 'Name' and 'SeqNo' such that each INSERT'd row gets one of the unique integers.
E.g. -
Table1 contains 'Fred','Tom','Mary','Larry'
Table2 contains 6000978,6000979,6000980,6000981
INSERT INTO Table3
SELECT Table1.Name,
Table2.SeqNo
FROM Table1
And I want to get Table3
'Fred',6000978
'Tom',6000979
'Mary',6000980
'Larry',6000981
How can I reference Table2 so that Table2.SeqNo will 'line up' properly? Note that the ordering of the SeqNo values isn't mandatory as long as each SeqNo is assigned to one and only one row.
On edit: Table2 isn't required, it's just the way I started thinking about it. It would be nicer to just have two integer vars, @StartSeqNo = 6000978 and @EndSeqNo = 6000981 for he example above. Either way is fine.
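With SQL Server 2005 or later, ROW_NUMBER() can line the two sets up, or generate the numbers directly from a starting value. Sketches of both, using the names from the post:
-- Pairing Table1 with Table2 row-for-row
INSERT INTO Table3 (Name, SeqNo)
SELECT t1.Name, t2.SeqNo
FROM (SELECT Name,  ROW_NUMBER() OVER (ORDER BY Name)  AS rn FROM Table1) t1
JOIN (SELECT SeqNo, ROW_NUMBER() OVER (ORDER BY SeqNo) AS rn FROM Table2) t2
  ON t1.rn = t2.rn;

-- Or, with just a starting value (no Table2 needed)
DECLARE @StartSeqNo int;
SET @StartSeqNo = 6000978;
INSERT INTO Table3 (Name, SeqNo)
SELECT Name, @StartSeqNo + ROW_NUMBER() OVER (ORDER BY Name) - 1
FROM Table1;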
View 3 Replies
View Related
Aug 21, 2014
I have two tables, customers and appointments.
I want to bulk insert records for a date range and a day of the week.
For example, John Smith has an appointment every Monday for the next three months. How do I accomplish this?
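A sketch of the bulk insert, assuming a hypothetical appointments table Appointments(customerID, apptDate) and customer id; a recursive CTE walks the date range and keeps only the requested weekday:
DECLARE @customerID int, @startDate date, @endDate date;
SET @customerID = 123;
SET @startDate  = '2014-09-01';
SET @endDate    = '2014-11-30';

WITH Dates (d) AS
(
    SELECT @startDate
    UNION ALL
    SELECT DATEADD(day, 1, d) FROM Dates WHERE d < @endDate
)
INSERT INTO Appointments (customerID, apptDate)
SELECT @customerID, d
FROM Dates
WHERE DATENAME(weekday, d) = 'Monday'
OPTION (MAXRECURSION 366);   -- raise the default recursion limit to cover the date range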
View 1 Replies
View Related
Aug 16, 2006
I am attempting to write a SQL query that retrieves info processed between two times (ie. 2:00 pm to 6:00 pm) during a date range (ie. 8/1/06 to 8/14/06)... I am new to SQL and am perplexed... I have referenced several texts, but have not found a solution. Even being pointed in the right direction would be greatly appreciated!!
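One way to express it (a sketch, assuming a single datetime column called ProcessedDate on a hypothetical table Orders): filter the date range and the time-of-day window separately.
SELECT *
FROM Orders
WHERE ProcessedDate >= '2006-08-01'
  AND ProcessedDate <  '2006-08-15'                  -- date range 8/1/06 through 8/14/06
  AND DATEPART(hour, ProcessedDate) >= 14            -- 2:00 pm
  AND DATEPART(hour, ProcessedDate) <  18;           -- up to (but not including) 6:00 pm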
View 6 Replies
View Related
May 14, 2008
Right now I have a stored procedure that goes through each of the Line and Body fields using a cursor. The problem is that this method is very slow. How would you experts solve this problem? Any hints or suggestions?
BEFORE
Example  Part    Line         Body    Series/Engine  Year
1        1234    A,B                  WETC           1998
2        56789   91,93,94,95          WET0           1997
3        345656  S,R          5,6,12  WENC           1995
AFTER
Example  Part    Line  Body   Series/Engine  Year
1        1234    A             WETC          1998
1        1234    B             WETC          1998
2        56789   91            WET0          1997
2        56789   93            WET0          1997
2        56789   94            WET0          1997
2        56789   95            WET0          1997
3        345656  S     5       WENC          1995
3        345656  S     6       WENC          1995
3        345656  S     12      WENC          1995
3        345656  R     5       WENC          1995
3        345656  R     6       WENC          1995
3        345656  R     12      WENC          1995
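If the cursor is the bottleneck, a set-based split is usually far faster. A sketch with hypothetical table/column names (Parts, Line) and a numbers table (master.dbo.spt_values type 'P' works for short strings); it cracks the comma-delimited Line column into one row per value:
SELECT p.Part,
       SUBSTRING(p.Line, n.number,
                 CHARINDEX(',', p.Line + ',', n.number) - n.number) AS LineValue
FROM Parts p
JOIN master.dbo.spt_values n
  ON n.type = 'P'
 AND n.number BETWEEN 1 AND LEN(p.Line)
 AND SUBSTRING(',' + p.Line, n.number, 1) = ',';   -- a value starts right after each comma
The Body column splits the same way; combining the two split results per part (a cross join within each part) produces the AFTER layout without a cursor.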
View 4 Replies
View Related
Sep 29, 2000
Hi,
In my SQL Server 7.0, I have about 250 stored procedures in each database.
Before using them in my application, I want to encrypt them all.
I must add the "WITH ENCRYPTION" clause to each SP in every database, and it'll take me a long time. Is there a faster way to encrypt all SPs in all DBs? Has anyone got a utility SP (or any other way) to do this?
Thanks in advance.
View 1 Replies
View Related
Jul 20, 2005
In relation to my last post, I have a question for the SQL gurus. I need to update 70k records, and mark all those updated in a special column for further processing by another system.
So, if the record was
Key1, foo, foo, ""
it needs to become
Key1, fap, fap, "U"
if and only if the data values are actually different (as above, foo becomes fap); otherwise it must remain
Key1, foo, foo, ""
Is it quicker to:
1) get the row of the destination table, inspect all values programmatically, and determine IF an update query is needed
OR
2) just do an update on all rows, but adding "and (field1 <> value1 or field2 <> value2)" to the update query, that is
update myTable
set field1 = "foo", markField = "u"
where key = "mykey" and (field1 <> "foo")
The first one will not generate new update queries if the record has not changed, on account of doing a select, whereas the second version always runs an update, but some of them will not affect any rows. Will I need a full index on the second version?
Thanks in advance,
Asger Henriksen
View 2 Replies
View Related
Nov 26, 2014
I'm running a pre-defined script which used to work fine on a file that is resupplied every month. Below is the script used and the error message. Looking at the error, I assume there is a rogue record within the file, but I have looked in TextPad and cannot find it.
UPDATE [matching].[dbo].[hot_nov] SET [AOV] = (CAST([Demand] AS DECIMAL)/CAST([Orders] AS INT)) WHERE [Demand] <> '';
UPDATE [matching].[dbo].[hot_nov] SET [POST2] = left([PostCode], PATINDEX('%[0-9]%', [PostCode] + '1') - 1) ;
UPDATE [matching].[dbo].[hot_nov] SET [POST4] = left([PostCode],LEN([PostCode])-4);
UPDATE [matching].[dbo].[hot_nov] SET [POST6] = left([PostCode],LEN([PostCode])-2);
error message
(1000 row(s) affected)
Msg 245, Level 16, State 1, Line 181
Conversion failed when converting the varchar value '2014-09-03 00:00:00' to data type int.
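To track down the rogue row(s), it may help to select anything in the numeric columns that won't convert cleanly; a sketch against the same table (ISNUMERIC is imperfect, but it usually narrows things down):
SELECT *
FROM [matching].[dbo].[hot_nov]
WHERE ([Demand] <> '' AND ISNUMERIC([Demand]) = 0)
   OR ([Orders] <> '' AND ISNUMERIC([Orders]) = 0);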
View 5 Replies
View Related
Dec 26, 2000
Hi,
1) I need to transfer 500 GB of data from one server to another; which is faster, DTS, BCP, or backup/restore?
2) What are the best methods for checking blocking, deadlocks & indexes?
Thank you all in advance
Richard..
View 5 Replies
View Related
Nov 5, 2004
I have a master table which has demographic data such as name, dob, location, along with a primary key id. It will have about 10-12000 records. We get a refresh file every hour which may or may not have corrections for these records, with about 3,000 records. I put this data into a table. This data should always be considered correct. To handle the update to the master table I need to create an update process. I can take one of two approaches: just update all the records in the master table regardless of whether they are correct or not, or do some type of left join on those that do not match (in other words, only update the ones where the names or dob don't match). There is an underlying update trigger on the patient master which will also fire if these values are changed. Any opinions on the best approach?
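A sketch of the "only update what changed" variant, with hypothetical table and column names (PatientMaster, RefreshStage, patient_id as the key, name and dob as the compared columns); it also keeps the trigger from firing for unchanged rows:
UPDATE m
SET    m.name = r.name,
       m.dob  = r.dob
FROM   PatientMaster m
JOIN   RefreshStage  r ON r.patient_id = m.patient_id
WHERE  m.name <> r.name
   OR  m.dob  <> r.dob;          -- add ISNULL handling if the compared columns are nullable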
View 1 Replies
View Related
Feb 2, 2006
Hi,
I have a production server that has an 8Gb db. It is dual Xeon with 5x HDD - 2 mirrored and 3 striped. db on stripe, log and OS on mirror. 2x Gb network cards.
The application goes slow (ie users notice) when a backup is running, so I have placed a crossover cable from one NIC to a test server so that it can back up to a HDD on that server, and then to tape. The test server has 2x Gb NIC and the link between the two servers is on a separate subnet. However, in the first trial of this the backup and verify takes 3 minutes longer.
Is this because the target server doesn't have a disk stripe?
What is the best config for the production server (ie will a slower backup to another server mean less load contending with the application)?
thanks
Fatherjack
View 2 Replies
View Related
May 22, 2007
I've got a view that is driven from an 80 million record table in a data warehouse. I am trying to populate an aggregate table in a datamart, but am running into performance problems. The datamart table needs to be updated daily. I understand there are many factors that affect performance, but in general would the fastest approach be:
1) Truncate the datamart table
2) Perform a bcp of the view to a text file
3) Bulk Insert to the datamart table
If you need more information to answer this please let me know.
Thanks,
Matt
View 7 Replies
View Related
Jul 23, 2005
Hi! We have SQL Server 2000 on our server (NT 4). Our database now has about 350,000 rows with information about images. The table has a lot of columns, including information about image name, keywords, location, price, color mode etc. So our database doesn't include the images themselves, just a path to the location of every image. The keywords field has data, for example, like this: cat,animal,pet,home,child with pet,child.
Now our search uses Full-Text Search, which sounded like a good idea in the beginning, but it has had problems that really reduce our search engine's performance. Also, search results are not exact enough. Some of our images also have the photographer's name in the keywords column, and if the photographer's name is, for example, Peter Moss, his pictures appear on the web page when a customer wants to search for "moss" (nature-like) pictures.
Another problem is that Full-Text Search started to be very slow when the query result contains thousands of rows. When a search term returns at most 3000 rows, the search is fast, but larger searches take from 6 to 20 seconds to finish, which is not good. I have also noticed that the first search is always very slow, but the next ones are faster. It seems that the engine is just "starting" when the first query runs.
Is there a better and faster way to handle the queries? Is it better to rebuild the database somehow and use another method to search than Full-Text Search? I don't know how else to handle the database when every image has about 10 to even 50 different keywords to search.
We have made the web interface and search code with ColdFusion. ColdFusion Server then takes care of sending all queries to SQL Server. I hope that somebody has some idea how to speed up our picture search.
--
Message posted via http://www.sqlmonster.com
View 2 Replies
View Related
Oct 21, 2007
I'm trying to dedupe a table with only one field in it. The table has 40 million records in it. What is the fastest way?
1) create a table with a unique constraint on it and insert into that table?
2) create a table without a unique constraint on it and use insert into table select distinct un from table2?
3) another way?
Michael
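For a one-column table this is often quickest as a SELECT INTO of the distinct values followed by a rename; a sketch with hypothetical table names, keeping the column name un from the post:
SELECT DISTINCT un
INTO   dbo.MyTable_dedup          -- SELECT INTO can be minimally logged (simple/bulk-logged recovery)
FROM   dbo.MyTable;

-- then swap the tables
EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
EXEC sp_rename 'dbo.MyTable_dedup', 'MyTable';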
View 1 Replies
View Related
Jan 10, 2008
Can anyone point to a reference which documents the pros and cons of the various connection protocols, such as Shared Memory vs. TCP/IP? I thought I saw something indicating that shared memory is fastest, which would explain why this protocol is tried first, but now I can't find it. This resource has information on creating connection strings, but not the advantages and disadvantages.
http://msdn2.microsoft.com/en-us/library/ms187662.aspx
View 7 Replies
View Related
Jan 11, 2007
Hi,
I'd like to know the fastest way to add named instances to SQL server 2005 (I need to add 5 named instances)
Thank you!
John
View 4 Replies
View Related
Nov 17, 2006
I am trying to find some info on the fastest connection transport for an app that is running on the same box as a SQL 2005 instance. The app does a large number of updates (high volume of data), Win32 using native ODBC. I am trying to find info on which connection mechanism is best: sockets, pipes, etc. I read somewhere that there is a file-mapped method available, but I cannot find info on that either. Also, is there any performance difference between the old SQL ODBC driver and the new SQL Native Client ODBC driver?
Thanks.
View 2 Replies
View Related
Jul 30, 2007
Hi All,
I'm bulk loading a ton of data into MS SQL Server 2005 Standard Edition. I used to do this process in version 2000. It seems there is some more overhead in 2005. Is there a way to drop logging to almost nothing to speed up the insert?
This is my current sql statment to load data.
EXEC sp_dboption 'my_stuff', 'select into/bulkcopy', 'true'
SET ANSI_WARNINGS OFF
BULK INSERT mystuff.dbo.[v1]
FROM 'c:myfile.txt'
WITH
(
FIRSTROW = 1,
FORMATFILE = 'c:scriptsv1.fmt',
MAXERRORS=2000,
ROWS_PER_BATCH=100000
)
Thanks,
Mike
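On 2005 the equivalent knobs are the recovery model and the TABLOCK hint rather than sp_dboption; a sketch (assuming it is acceptable for this database to run bulk-logged during the load, and keeping the paths exactly as in the post):
ALTER DATABASE my_stuff SET RECOVERY BULK_LOGGED;   -- replaces the old 'select into/bulkcopy' option

BULK INSERT mystuff.dbo.[v1]
FROM 'c:myfile.txt'
WITH
(
    FIRSTROW = 1,
    FORMATFILE = 'c:scriptsv1.fmt',
    MAXERRORS = 2000,
    ROWS_PER_BATCH = 100000,
    TABLOCK                                          -- table lock enables minimal logging for the load
);

ALTER DATABASE my_stuff SET RECOVERY FULL;           -- switch back (and take a log backup) afterwards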
View 2 Replies
View Related
Apr 6, 2015
I have 2 tables. One is table A, which stores resources assigned to work for a certain period. The structure is as below:
Name StartDate EndDate
Tan 2015-04-01 08:30:00.000 2015-04-01 16:30:00.000
Max 2015-04-01 08:30:00.000 2015-04-01 16:30:00.000
Alan 2015-04-01 16:30:00.000 2015-04-02 00:30:00.000
Table B stores the item process times. The structure is as below:
Item ProcessStartDate ProcessEndDate
V 2015-04-01 09:30:10.000 2015-04-01 09:34:45.000
Q 2015-04-01 10:39:01.000 2015-04-01 10:41:11.000
W 2015-04-01 11:44:00.000 2015-04-01 11:46:25.000
A 2015-04-01 16:40:10.000 2015-04-01 16:42:45.000
B 2015-04-01 16:43:01.000 2015-04-01 16:45:11.000
C 2015-04-01 16:47:00.000 2015-04-01 16:49:25.000
I need to select the items which were processed between 2015-04-01 16:40:00 and 2015-04-01 17:30:00. Besides that, I need to know how many resources were assigned to process the items in that period of time. I only have the start date 2015-04-01 16:40:00 and the end date 2015-04-01 17:30:00. How can I select the data from both tables? There is no need for a JOIN, just separate selections.
Another item process window is between 2015-04-01 10:00:00 and 2015-04-04 11:50:59.
The result expected is
Table A
Name StartDate EndDate
Alan 2015-04-01 16:30:00.000 2015-04-02 00:30:00.000
Table B
Item ProcessStartDate ProcessEndDate
A 2015-04-01 16:30:10.000 2015-04-01 16:32:45.000
B 2015-04-01 16:33:01.000 2015-04-01 16:35:11.000
C 2015-04-01 16:37:00.000 2015-04-02 16:39:25.000
Scenario 2 expected result
Table A
Name StartDate EndDate
Tan 2015-04-01 08:30:00.000 2015-04-01 16:30:00.000
Max 2015-04-01 08:30:00.000 2015-04-01 16:30:00.000
Table B
Item ProcessStartDate ProcessEndDate
Q 2015-04-01 10:39:01.000 2015-04-01 10:41:11.000
W 2015-04-01 11:44:00.000 2015-04-01 11:46:25.000
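A sketch of the two separate selects, using TableA and TableB as placeholders for the real table names; the overlap test is "starts before the window ends and ends after the window starts":
DECLARE @winStart datetime, @winEnd datetime;
SET @winStart = '2015-04-01 16:40:00';
SET @winEnd   = '2015-04-01 17:30:00';

-- Resources assigned during any part of the window
SELECT Name, StartDate, EndDate
FROM   TableA
WHERE  StartDate <= @winEnd
  AND  EndDate   >= @winStart;

-- Items whose processing overlaps the window
SELECT Item, ProcessStartDate, ProcessEndDate
FROM   TableB
WHERE  ProcessStartDate <= @winEnd
  AND  ProcessEndDate   >= @winStart;
A COUNT(*) over the first query gives the number of resources assigned in that period; the second scenario just needs different values in the two variables.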
View 8 Replies
View Related