Transact SQL :: Copy ALL Rows From DB Table Out To Txt File
Jul 29, 2015
Copy out all data from a DB table into/across delimited text file(s), ensuring that each text file is no more than 3MB in size. I have created an SSIS solution which achieves this requirement, or at least sort of achieves it. Here is what the current solution does in a nutshell (sparing the minute details), along with its problems:
1) Created a function (script below) which finds the maximum row size in bytes in a given DB table and uses it to calculate how many rows can be copied out into a text file without exceeding the 3MB size limit.
For instance: a selected DB table had 788 rows in total, and for this particular table the function returned a value of 181 rows { select [dbo].[udf_GetRowPartitionNumber]('<TableName>') as #ofRowstoPartitionTableby --181 }, meaning that in order to not exceed the 3MB-per-file requirement, we had to copy all the data from the DB table across (create) 5 text files {Select
CEILING(788
[code]....
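For reference, here is a minimal sketch of the partitioning math described above (the table and column names are placeholders, not the actual function): find the widest row in bytes, work out how many rows fit under 3MB, and from that how many files are needed.
DECLARE @MaxRowBytes int, @TotalRows int, @RowsPerFile int;
SELECT @MaxRowBytes = MAX(ISNULL(DATALENGTH(Col1), 0) + ISNULL(DATALENGTH(Col2), 0)),  -- sum every column of the real table here
       @TotalRows   = COUNT(*)
FROM dbo.MyTable;
SET @RowsPerFile = (3 * 1024 * 1024) / @MaxRowBytes;                   -- rows that fit under the 3MB cap
SELECT @RowsPerFile AS RowsPerFile,
       CEILING(@TotalRows * 1.0 / @RowsPerFile) AS FilesNeeded;        -- e.g. 788 rows at 181 rows/file = 5 files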
View 4 Replies
Feb 3, 2010
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file, TEXT_1400.txt, has 1400 columns. I am unable to load the data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns is allowed per table in SQL Server 2008.
We can still go ahead and create a 'wide table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that these commands are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.
Is there a proper way to load this kind of file into a db table?
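One possible workaround, offered only as a sketch (the file path, table name and terminators are assumptions): stage each line of the 1400-column file as a single wide string and split it in T-SQL afterwards, so BCP/BULK INSERT never has to map 1400 columns.
CREATE TABLE dbo.Staging_Text1400 (RawLine varchar(max));
BULK INSERT dbo.Staging_Text1400
FROM 'C:\data\TEXT_1400.txt'
WITH (ROWTERMINATOR = '\n', FIELDTERMINATOR = '\0');   -- use a field terminator that never occurs in the data so the whole line lands in one column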
View 6 Replies
View Related
Jul 24, 2015
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2
select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before any insert operation using the above statement is carried out. I had the table trigger below on the [SRV1INS2].BB.dbo.Agents2 table, but it did not do what I intended:
USE [BB]
GO
/****** Object: Trigger Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[code]....
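A minimal sketch of an INSTEAD OF INSERT trigger that empties the table before loading the incoming rows, assuming that is the intent (if Agents2 has an identity column, list the columns explicitly instead of SELECT *):
CREATE TRIGGER trg_Agents2_ReplaceAll
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.Agents2;        -- clear the existing rows first
    INSERT INTO dbo.Agents2
    SELECT * FROM inserted;         -- then load the incoming batch
END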
View 3 Replies
View Related
Jul 12, 2000
Hi to all.
I have a strange question, but I need to work out the situation I'll describe here:
I need to replicate one row in my table into the same table, but with small changes in some fields. I mean: copy the row and insert it again into the same table I copied it from, with changes to some field values.
Do you know any way I can do this (temporary table, stored procedure and so on), BUT in one shot: one action for the select, insert and update operations?
Thanks
Sharon.
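A sketch of the one-statement approach (table, column and key names are illustrative): select the source row and override the fields that should change as part of the INSERT.
INSERT INTO dbo.MyTable (Col1, Col2, Col3)
SELECT Col1, 'new value', Col3      -- Col2 gets the changed value, the others are copied as-is
FROM dbo.MyTable
WHERE KeyCol = 42;                  -- the row to duplicate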
View 2 Replies
View Related
Nov 23, 2007
Hi All,
I have 2 tables People & PeopleCosts.
PeopleID in the People table is the primary key and a foreign key in the PeopleCosts table. PeopleID is an identity (autonumber) column.
The major fields in the People table are PeopleID | MajorVersion | SubVersion. I want to create a new copy of the data for an existing subversion (say, from subversion 1 to 2) in the same table. When the new data is copied, PeopleID gets incremented; how do I copy the related data in the other table (PeopleCosts) so it points at the new set of PeopleIDs?
Kindly help. thanks in advance.
Myl
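A sketch of one way to copy the rows while keeping the parent/child link (column names are assumptions): MERGE with an always-false match condition inserts the copies and, unlike plain INSERT ... OUTPUT, lets the OUTPUT clause capture the old-to-new PeopleID mapping, which is then used to copy the PeopleCosts rows.
DECLARE @IdMap TABLE (OldPeopleID int, NewPeopleID int);

MERGE dbo.People AS tgt
USING (SELECT PeopleID, MajorVersion, SubVersion FROM dbo.People WHERE SubVersion = 1) AS src
   ON 1 = 0                                               -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (MajorVersion, SubVersion) VALUES (src.MajorVersion, 2)
OUTPUT src.PeopleID, inserted.PeopleID INTO @IdMap (OldPeopleID, NewPeopleID);

INSERT INTO dbo.PeopleCosts (PeopleID, Cost)              -- list the remaining PeopleCosts columns as needed
SELECT m.NewPeopleID, pc.Cost
FROM dbo.PeopleCosts pc
JOIN @IdMap m ON pc.PeopleID = m.OldPeopleID;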
View 3 Replies
View Related
Aug 15, 2015
I am using a stored procedure to load a GridView, but the problem is that I am not getting all rows from the first table [Subject] when applying conditions on the second table [Faculty_Subject]. As you can see below, if I apply the condition:
Faculty_Subject.Class_Id=@Class_Id
then I don't get all subjects from the Subject table. How can this be achieved?
Sql Code:-
GO
ALTER Proc [dbo].[SP_Get_Subjects_Faculty_Details]
@Class_Id int
AS BEGIN
[code] ....
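The usual fix, sketched below (join column names are assumptions): use a LEFT JOIN and move the Class_Id filter into the ON clause, so Subject rows without a matching Faculty_Subject row are still returned.
SELECT s.Subject_Id, s.Subject_Name, fs.Faculty_Id
FROM dbo.Subject AS s
LEFT JOIN dbo.Faculty_Subject AS fs
       ON fs.Subject_Id = s.Subject_Id
      AND fs.Class_Id   = @Class_Id     -- filtering here instead of in WHERE keeps all Subject rows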
View 9 Replies
View Related
Nov 18, 2005
This is the error I get with a simple Data Reader Source (SELECT * FROM A) to SQL Server Destination copy between identical tables, from a linked server to a local server:
View 6 Replies
View Related
Jun 10, 2015
Matrix table has ID column and data below.
ID Flag TestDate Value Comment
111 2 12/15/2014 7.5 null
222 2 Null 10 received
Matrix_Current table could have 1 or multiple rows as below.
ID Flag TestDate Value Comment
111 2 01/26/2015 7.9
111 2 02/23/2015 7.9
111 2 04/07/2015 6.8
222 1 null 8 test comment 1
222 3 null 9 test comment 2
When I run the update below:
UPDATE M
SET M.Flag = MC.Flag, M.TestDate = MC.TestDate,
M.Value = MC.Value, M.Comment = MC.Comment
FROM dbo.Matrix M INNER JOIN dbo.Matrix_Current MC ON M.ID = MC.ID
the Matrix table ends up with the values below:
ID Flag TestDate Value Comment
111 2 01/26/2015 7.9
222 1 Null 8 test comment 1
I want to update the Matrix table using the last row for each ID from Matrix_Current, so the final table would look like below:
ID Flag TestDate Value Comment
111 2 04/07/2015 6.8
222 3 Null 9 test comment 2
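A sketch of one way to get the last-row-per-ID behaviour shown above (the ORDER BY that defines "last" is an assumption; adjust it to whatever column actually orders the Matrix_Current rows):
;WITH LastPerID AS
(
    SELECT MC.*,
           ROW_NUMBER() OVER (PARTITION BY MC.ID ORDER BY MC.TestDate DESC) AS rn
    FROM dbo.Matrix_Current MC
)
UPDATE M
SET M.Flag     = L.Flag,
    M.TestDate = L.TestDate,
    M.Value    = L.Value,
    M.Comment  = L.Comment
FROM dbo.Matrix M
JOIN LastPerID L ON L.ID = M.ID AND L.rn = 1;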
View 3 Replies
View Related
Sep 15, 2015
I have a question regarding the total count of table rows.
Select count(name) from test — let's say I get a count of 200. Select count(lastname) from test1 also returns 200. I should store these counts in the message "There are nn name items awaiting your attention and nn pending lastname's awaiting your approval".
So now I have to store the counts in place of 'nn'.
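A sketch of storing both counts in variables and building the message (table and column names are taken from the post):
DECLARE @NameCount int, @LastNameCount int, @Msg nvarchar(300);

SELECT @NameCount     = COUNT(name)     FROM test;
SELECT @LastNameCount = COUNT(lastname) FROM test1;

SET @Msg = 'There are ' + CAST(@NameCount AS varchar(10))
         + ' name items awaiting your attention and ' + CAST(@LastNameCount AS varchar(10))
         + ' pending lastname''s awaiting your approval';
SELECT @Msg AS Message;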
View 5 Replies
View Related
Apr 17, 2012
I have a table, for example, like the following:
DECLARE @tmpTable table
(
name varchar(10),
address1 varchar(10),
phnno varchar(10),
mobno varchar(10)
)
INSERT INTO @tmpTable(name,address1,phnno,mobno)
[Code] ....
I want to remove all empty rows, like rows 1, 2 and 3 in the above example.
I can't check every column for NULL values individually, as there are many columns in my actual table.
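A sketch of the idea for the four-column example above (a row counts as empty when every column is NULL or blank). For the real table with many columns, the same predicate can be generated dynamically from sys.columns instead of typed by hand.
DELETE FROM @tmpTable
WHERE NULLIF(name, '')     IS NULL
  AND NULLIF(address1, '') IS NULL
  AND NULLIF(phnno, '')    IS NULL
  AND NULLIF(mobno, '')    IS NULL;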
View 6 Replies
View Related
Sep 2, 2015
I am using SQL Server 2008 R2. I have an existing query that basically says:
Select Top 50 Subscriber_ID, Member_Name, Group_ID
from my_table
order by rand(checksum(newid()))
However, the client now wants at least two rows from each group_id. There are 17 different groups. When I run this as is, I get about six of the 17 groups in the results. How can I change this to get at least two results from each group_id?
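One common approach, sketched below (not the only one): guarantee two random rows per group first, then top the result up to 50 with random rows from the remainder.
;WITH Ranked AS
(
    SELECT Subscriber_ID, Member_Name, Group_ID,
           ROW_NUMBER() OVER (PARTITION BY Group_ID ORDER BY NEWID()) AS rn
    FROM my_table
)
SELECT TOP 50 Subscriber_ID, Member_Name, Group_ID
FROM Ranked
ORDER BY CASE WHEN rn <= 2 THEN 0 ELSE 1 END,   -- the two-per-group rows are taken first
         NEWID();                               -- the remaining slots are filled at random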
View 6 Replies
View Related
Sep 10, 2015
I am a newbie to SQL Server and I have a task where my current table always has two rows, with the ID column as the primary key, and looks like below:
ID PlateNo Type Image Name
27 455 User img1.jpg
32 542 Alternative img2.jpg
And I want a SQL query to modify my table so that the data looks as shown below:
ID PlateNo Type Image Name
27 542 Alternative img2.jpg
32 455 User img1.jpg
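A sketch of a value swap for a two-row table keyed on ID (the table name is a placeholder; column names are assumed from the layout above): a single UPDATE with a self-join copies each row's values from the other row.
UPDATE t
SET t.PlateNo   = o.PlateNo,
    t.Type      = o.Type,
    t.ImageName = o.ImageName
FROM dbo.Plates t
JOIN dbo.Plates o ON o.ID <> t.ID      -- with exactly two rows, each row pairs with the other
WHERE t.ID IN (27, 32);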
View 8 Replies
View Related
Jul 21, 2015
I have a table with more than 1000 records. The columns look like this:
Import id | Reference | Date | Username | Filename
Some of the rows are duplicates except that they have different values in the first column (import id) alone:
11111 test 21-07-2015 xxxxxx yyyyyy
22222 test 21-07-2015 xxxxxx yyyyyy
Is it possible to delete these kinds of duplicate rows while ignoring the import id?
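The usual pattern, sketched below (table and column names are assumptions based on the layout above): number the rows while ignoring the import id, then delete everything after the first row in each duplicate group.
;WITH Dupes AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Reference, [Date], Username, Filename
                              ORDER BY ImportId) AS rn
    FROM dbo.Imports
)
DELETE FROM Dupes
WHERE rn > 1;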
View 9 Replies
View Related
May 13, 2008
Hello:
I want to copy one table, not the whole database, to a text file. How do I do it? Using DTS does not allow me to select a specific table.
Thanks,
Snow:rolleyes:
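One option, sketched with placeholder names and paths: call bcp through xp_cmdshell (it must be enabled) to export a single table or query to a delimited text file.
EXEC xp_cmdshell 'bcp "SELECT * FROM MyDatabase.dbo.MyTable" queryout "C:\export\MyTable.txt" -c -t"," -T -S MyServer';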
View 3 Replies
View Related
Aug 11, 2015
I have a table with the following fields:
ID (int, identity)
Name (nvarchar(255))
Block (nvarchar(50))
Street (nvarchar(255))
Floor (nvarchar(50))
Unit (nvarchar(50))
Address1 (nvarchar(255))
Address2 (nvarchar(255))
I want to iterate through the table and populate Address1 as [Block] [Street] #[Floor]-[Unit]. If the 'Floor' field contains a number < 10 (e.g., '8'), I want to add a '0' before it (e.g., '08'); the same applies to Unit. How would I do this using cursors (or another recommended method)?
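A set-based sketch that avoids a cursor entirely (assumption: Floor and Unit hold numeric text of one or two characters; the table name is a placeholder):
UPDATE dbo.Addresses
SET Address1 = Block + ' ' + Street + ' #'
             + CASE WHEN LEN([Floor]) = 1 THEN '0' + [Floor] ELSE [Floor] END + '-'
             + CASE WHEN LEN([Unit])  = 1 THEN '0' + [Unit]  ELSE [Unit]  END;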
View 4 Replies
View Related
Apr 17, 2015
I have a table with several columns and one identifier. I need to write to this table only when a row has been modified with respect to the columns, not the identifier.
So I've created a temporary table to hold the potential rows to write to the real table, but I need to detect the modified rows. I've thought about using the checksum function, but I don't know how to use it or whether it would be useful in this scenario.
Moreover, the rows to write are collected daily in the temporary table: the first day a row could have one value in its columns, the next day a different value, and the day after that the same value as the first day.
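A sketch of the change-detection idea (column names are illustrative): compare a per-row hash of the non-key columns; HASHBYTES is less collision-prone than CHECKSUM, though either fits the pattern.
SELECT t.ID
FROM #TempRows t
JOIN dbo.RealTable r ON r.ID = t.ID
WHERE HASHBYTES('SHA2_256', CONCAT(t.Col1, '|', t.Col2, '|', t.Col3))
   <> HASHBYTES('SHA2_256', CONCAT(r.Col1, '|', r.Col2, '|', r.Col3));   -- only rows whose non-key columns differ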
View 26 Replies
View Related
Aug 27, 2015
I have the following table columns and I want to convert all these columns into rows:
SELECT [CASHINHAND]
,[CASHATBANK]
,[DEPOSITS]
,[ACCRECEVABLE]
,[SUNDRYDEBTORS]
,[LOANANDADVANCES]
[Code] ....
The required output looks like:
CASHINHAND 118950
CASHATBANK 200
DEPOSITS 3000
ACCRECEIVABLE 25000
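A sketch of the usual UNPIVOT pattern (the source table name is a placeholder, and UNPIVOT assumes the listed columns share a compatible type):
SELECT AccountName, Amount
FROM (SELECT [CASHINHAND], [CASHATBANK], [DEPOSITS],
             [ACCRECEVABLE], [SUNDRYDEBTORS], [LOANANDADVANCES]
      FROM dbo.Balances) AS src
UNPIVOT (Amount FOR AccountName IN
         ([CASHINHAND], [CASHATBANK], [DEPOSITS],
          [ACCRECEVABLE], [SUNDRYDEBTORS], [LOANANDADVANCES])) AS u;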
View 3 Replies
View Related
Aug 11, 2014
I need to use a BULK INSERT statement to copy a table with 200 million rows to another table on the same server. The table has no primary key or identity column. I need a script for the BULK INSERT...
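Note that BULK INSERT reads from files, not tables; for a table-to-table copy on the same server a minimally logged INSERT ... SELECT is the usual route (a sketch; table names are placeholders):
INSERT INTO dbo.TargetTable WITH (TABLOCK)   -- TABLOCK enables minimal logging under the simple or bulk-logged recovery model
SELECT *
FROM dbo.SourceTable;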
View 9 Replies
View Related
Nov 6, 2006
Hi Guys,
What approach should I use to copy the content of a text file?
Example of the text file's content:
Date, "20060101"
ST_Code, "101"
A_Code, P_Code, T_Code, amount, price
"0001", "1111", "0101", 550, 230
"0002", "1111", "0102", 345, 122
"2001", 0212", 0930", 410, 90
In the example above, I just want to copy the rows Date, "20060101" and ST_Code, "101" into a table.
Regards,
Lars
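One possible sketch (the file path, table and terminators are assumptions): BULK INSERT can be limited to the first two lines with FIRSTROW/LASTROW, landing the Date and ST_Code pairs in a two-column staging table.
CREATE TABLE dbo.FileHeader (KeyName varchar(50), KeyValue varchar(50));

BULK INSERT dbo.FileHeader
FROM 'C:\data\input.txt'
WITH (FIRSTROW = 1, LASTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');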
View 2 Replies
View Related
Dec 2, 2015
This question is an extension of the topic "Updating table Rows with overlapping dates": [URL] .....
I am actually having a table as following:
Table Name: PromotionList
Note that the NULL in endDate means no end date or infinite end date.
ID PromotionID StartDate EndDate Flag
1 1 2015-04-05 2015-05-28 0
2 1 2015-04-23 NULL 0
3 2 2015-03-03 2015-05-04 0
4 1 2015-04-23 2015-05-29 0
5 1 2015-01-01 2015-02-02 0
And I would like to produce the following outcome in the same table (using an update statement). As you can observe, it merges all overlapping dates for the same promotion ID by taking the minimum start date and the maximum end date. Only the first row of each overlapping set is updated to the desired values and its flag changes to 1. The other overlapping rows have their dates set to NULL and their flag becomes 2.
Flag = 1: successfully merged row. Flag = 2: failed row.
ID PromotionID StartDate EndDate Flag
1 1 2015-04-05 NULL 1
2 1 NULL NULL 2
3 2 2015-03-03 2015-05-04 1
4 1 NULL NULL 2
5 1 2015-01-01 2015-02-02 1
The second part I would like to achieve is based on the first table as well. However, this time I would like to merge the dates so that the result has the minimum start date and the end date of the last overlapping row. Since the end date of the last overlapping row for promotion ID 1 is the row with ID 4 (end date 2015-05-29), the table will look as follows after the update:
ID PromotionID StartDate EndDate Flag
1 1 2015-04-05 2015-05-29 1
2 1 NULL NULL 2
3 2 2015-03-03 2015-05-04 1
4 1 NULL NULL 2
5 1 2015-01-01 2015-02-02 1
Note that the above is just sample data. The actual data might contain thousands of records, and hopefully this can be done in a single update statement.
Extending the above question, two extra columns have now been added to the table: ShouldDelete and PromotionCategoryID.
Original table:
ID PromotionID StartDate EndDate Flag ShouldDelete PromotionCategoryID
1 1 2015-04-05 2015-05-28 0 Y 1
2 1 2015-04-23 2015-05-29 0 NULL NULL
3 2 2015-03-03 2015-05-04 0 N NULL
4 1 2015-04-23 NULL 0 Y 1
5 1 2015-01-01 2015-02-02 0 NULL NULL
Should Delete can take Y, N and NULL
PromotionCategoryID can take any integer and NULL
Now the data should be partitioned by PromotionID, ShouldDelete and PromotionCategoryID (all three must match for rows to be grouped together).
By taking the min date and max date within each group, the following table should be achieved after the update.
Final outcome:
ID PromotionID StartDate EndDate Flag ShouldDelete PromotionCategoryID
1 1 2015-04-05 NULL 1 Y 1
2 1 2015-04-23 2015-05-29 1 NULL NULL
3 2 2015-03-03 2015-05-04 1 N NULL
4 1 NULL NULL 2 Y 1
5 1 2015-01-01 2015-02-02 1 NULL NULL
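A sketch of the gaps-and-islands step only, not a full solution to either part (open-ended NULL end dates are handled with a far-future sentinel, which is an assumption): it groups overlapping ranges per PromotionID so that MIN(StartDate) and MAX(EndDate) can then be taken per island and written back with the flag logic described above.
;WITH Ordered AS
(
    SELECT ID, PromotionID, StartDate,
           ISNULL(EndDate, '9999-12-31') AS EndDate,
           MAX(ISNULL(EndDate, '9999-12-31'))
               OVER (PARTITION BY PromotionID ORDER BY StartDate
                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS PrevMaxEnd
    FROM dbo.PromotionList
),
Islands AS
(
    SELECT *,
           SUM(CASE WHEN PrevMaxEnd IS NULL OR StartDate > PrevMaxEnd THEN 1 ELSE 0 END)
               OVER (PARTITION BY PromotionID ORDER BY StartDate
                     ROWS UNBOUNDED PRECEDING) AS IslandNo   -- new island whenever a range does not overlap any earlier one
    FROM Ordered
)
SELECT PromotionID, IslandNo,
       MIN(StartDate)                     AS MergedStart,
       NULLIF(MAX(EndDate), '9999-12-31') AS MergedEnd
FROM Islands
GROUP BY PromotionID, IslandNo;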
View 2 Replies
View Related
Sep 17, 2015
I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:
Source Tables :
OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,
[Code] ....
The update requirement is as follows:
DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
SET rowcount 10000
update r
set Comp=t.Comp
[Code] ....
The update ran for more than 48 hours and didn't terminate. How can I speed it up?
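A sketch of a more conventional batching pattern (SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE; the target table, the Comp column and the join condition below are assumptions): update in TOP (N) chunks and restrict each pass to rows that still need the change, ideally supported by an index on the join columns.
DECLARE @Batch int = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@Batch) r
    SET r.Comp = t.Comp
    FROM dbo.TargetTable r
    JOIN dbo.OC t ON t.[Code Article] = r.[Code Article]   -- assumed join condition
    WHERE r.Comp IS NULL OR r.Comp <> t.Comp;              -- only rows not yet updated

    IF @@ROWCOUNT = 0 BREAK;                               -- nothing left to update
END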
View 6 Replies
View Related
Nov 4, 2015
Existing table structure is below:
Table name: Student
Columns in Student are below:
Student_Id
Subject_Id
Quarter
Col1
Col2
The combination of the Student_Id, Subject_Id and Quarter columns is the primary key. One student can take one subject in a quarter. Now the new requirement is that a student can take multiple subjects in a quarter, so I need to add another table like below:
NEW table name: Student_Subject and
column are below:
Student_id
Subject_Id
Quarter1
The combination of all three columns above is the primary key.
After the new Student_Subject table is created, the Subject_Id column is removed from the Student table.
When the user clicks a button after selecting multiple subjects and providing Col1 and Col2 data, one row gets inserted into the Student table and multiple rows get inserted into the Student_Subject table.
Is there any other table design that satisfies one student can take multiple subjects in a quarter?
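A sketch of the proposed structure (the data types, and the assumption that the Student primary key becomes Student_Id + Quarter once Subject_Id is removed, are mine):
CREATE TABLE dbo.Student (
    Student_Id int NOT NULL,
    Quarter    int NOT NULL,
    Col1       varchar(50) NULL,
    Col2       varchar(50) NULL,
    CONSTRAINT PK_Student PRIMARY KEY (Student_Id, Quarter)
);

CREATE TABLE dbo.Student_Subject (
    Student_Id int NOT NULL,
    Subject_Id int NOT NULL,
    Quarter    int NOT NULL,
    CONSTRAINT PK_Student_Subject PRIMARY KEY (Student_Id, Subject_Id, Quarter),
    CONSTRAINT FK_Student_Subject_Student
        FOREIGN KEY (Student_Id, Quarter) REFERENCES dbo.Student (Student_Id, Quarter)
);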
View 15 Replies
View Related
Jun 30, 2015
I have a temp table with the following columns and data
drop table #temp
create table #temp (id int,DLR_ID int,KPI_ID int,Brnd_ID int)
insert into #temp values (1,2343,34,2)
insert into #temp values (2,2343,34,2)
insert into #temp values (3,2343,34,2)
[Code]....
I use the rank function on that table and get the following results
select rank() over (order by DLR_ID,KPI_ID,BRND_ID ) Rown,* from #temp
I am interested only in the Rown and Id columns. For each Rown number, I need to get the min(ID) in the second column, and the duplicate ID should be in the 3rd column, as shown below. If I have 3 duplicate IDs, I should have 3 rows with the 2nd column being the min(id) and the 3rd column holding one of the duplicate ids in ascending order (as shown in Rown=6).
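A sketch of one way to pair each row with the minimum ID of its group, using the same #temp table as above (DENSE_RANK is used so duplicates share a group number):
SELECT DENSE_RANK() OVER (ORDER BY DLR_ID, KPI_ID, Brnd_ID)  AS Rown,
       MIN(id) OVER (PARTITION BY DLR_ID, KPI_ID, Brnd_ID)   AS MinId,        -- minimum ID of the duplicate group
       id                                                    AS DuplicateId   -- each duplicate ID on its own row
FROM #temp
ORDER BY Rown, DuplicateId;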
View 7 Replies
View Related
Jul 16, 2015
We have a control table that indicates whether a job needs to be started or not. When we start a job, we set its flag to 1.
Below is the Table Structure.
Table Name IN_USE_FG
CUST_D 0
PROD_D 0
GEO_D 0
DATE_D 0
Now we have four different packages for loading data into these 4 tables, and they all start at the same time. Before loading the data, each package has to set its flag to 1 and then load; because of this, each package contains an UPDATE statement against the control table.
Now we are getting deadlocks because we are all using the same table at the same time, even though we are updating different records.
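One common mitigation, sketched below (the table and key column names are assumptions based on the layout above): make sure the flag update can seek on a unique key for the table-name column, so each package locks only its own row instead of scanning and blocking the others.
ALTER TABLE dbo.ControlTable
    ADD CONSTRAINT PK_ControlTable PRIMARY KEY (TableName);

UPDATE dbo.ControlTable WITH (ROWLOCK)
SET IN_USE_FG = 1
WHERE TableName = 'CUST_D';           -- each package updates only its own row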
View 6 Replies
View Related
Jul 29, 2015
In a T-SQL 2012 update script, listed below, the update only works for a few records, since the value of TST.dbo.LockCombination.seq is 1 in most cases. Basically, for every join listed below there should be 5 records, each with a distinct seq value of 1, 2, 3, 4 and 5. My goal is to determine how to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values 2 to 5. I would like to know how to insert the missing rows and then run the following update statement. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET LKC.combo = lockCombo2
FROM [LockerPopulation] A
JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
[Code] ....
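A sketch of one way to add the missing seq rows (assumption: a LockCombination row is identified by a lockerID plus a seq of 1 to 5, and the remaining columns can be copied from the existing seq = 1 row):
INSERT INTO TST.dbo.LockCombination (lockerID, seq, combo)
SELECT LC1.lockerID, n.seq, LC1.combo
FROM TST.dbo.LockCombination LC1
CROSS JOIN (VALUES (2), (3), (4), (5)) AS n(seq)
WHERE LC1.seq = 1
  AND NOT EXISTS (SELECT 1
                  FROM TST.dbo.LockCombination LC2
                  WHERE LC2.lockerID = LC1.lockerID
                    AND LC2.seq = n.seq);     -- only add sequences that are actually missing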
View 10 Replies
View Related
Nov 4, 2015
Setup:
Windows Server 2003 R2 - Enterprise - SP2 - 32 Bit
SQL Server 2014 Express - 32 Bit
Problem: I have a calculated field on a PO table that adds up item prices from an Item table to get the total PO value. This works as expected until there are at least 10 rows in the PO table. From the 10th row on, the calculated field stops working and only shows 0.
I have experienced this before, and it seems like calculated fields break from the 10th row of a table onward.
My PO table
CREATE TABLE [dbo].[PO](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Quote_Number] [varchar](max) NULL,
[Customer] [varchar](max) NULL,
[CustomerPO] [varchar](max) NULL,
[PO_Received_Date] [datetime] NULL,
[Total_PO_Value] [decimal](18, 2) NULL,
[Code] ....
View 10 Replies
View Related
Aug 4, 2015
I am looking for a SQL query to verify the values inserted from one table (in a CSV file) against another table (in a SQL database).
For example: I have the Values column below in a CSV file; after my data migration, the values get stored in the Results table under the Message column.
I need to verify whether the values (1X, 1Y) were inserted, by checking for Message records like "successfully inserted value 1X".
Values (CSV)
1X
1Y
Results Table(SQL)
CreatedDate Message
2015-08-04 08:45:29.203 successfully inserted value 1X
2015-08-04 08:44:29.103 TEst pass
2015-08-04 08:43:29.103 successfully inserted value 1X
2015-08-04 08:42:29.203 test point
2015-08-04 08:35:29.203 successfully inserted value 1Y
2015-08-04 08:30:29.203 Test Pass
2015-08-04 08:28:29.203 successfully inserted value 1Y
If all values are inserted:
Output:
All values from values table are inserted successfully
Total count of values inserted: 2
If only a few values are inserted, for example only 1X from the Values table appears in Message:
Example:
Results Table CreatedDate Message
2015-08-04 08:45:29.203 successfully inserted value 1X
2015-08-04 08:44:29.103 TEst pass
2015-08-04 08:43:29.103 successfully inserted value 1X
2015-08-04 08:42:29.203 test point
Output:
Not all values from the Values table were inserted successfully into the Results table.
Total count of values inserted: 1
Missing Values not inserted in results table are: 1Y
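A sketch of the comparison (assumption: the CSV values have already been loaded, e.g. via BULK INSERT; here a table variable stands in for them):
DECLARE @Values TABLE (Val varchar(50));
INSERT INTO @Values VALUES ('1X'), ('1Y');

SELECT v.Val,
       CASE WHEN EXISTS (SELECT 1 FROM dbo.Results r
                         WHERE r.Message = 'successfully inserted value ' + v.Val)
            THEN 'Inserted' ELSE 'Missing' END AS Status
FROM @Values v;

SELECT COUNT(*) AS TotalInserted                       -- count of values found in the Results table
FROM @Values v
WHERE EXISTS (SELECT 1 FROM dbo.Results r
              WHERE r.Message = 'successfully inserted value ' + v.Val);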
View 3 Replies
View Related
Aug 22, 2006
Hi There
I have a table at my initiator and a copy of this table at my target. Every morning I want to refresh the table at the target with the data from the initiator via Service Broker.
What would be the best way to do this?
One major consideration is that this table is very large: +- 1.6 million rows.
I was thinking of exporting the table to a file (such as a CSV), compressing the file and then sending it via Service Broker. Then at the target I would uncompress it and bulk copy the data into the target table.
But can this be done?
How would one go about sending a file via service broker?
For example if i wanted to send a .csv or something via service broker how would i do it?
I know I can send binary data, but I would have to equate the message to a file on disk. How exactly would I receive this file on the other side and make sure it is written to disk at the target? I am guessing it is a mixture of xp_cmdshell and Service Broker.
Can this work?
Or is there a much simpler way? Any thoughts would be appreciated.
Thanx
View 6 Replies
View Related
Sep 29, 2015
I would like to perform an auto-deletion operation on my table using a stored procedure.
Here is my table structure:
Emp.id | emp_name | dept | desig | time_out | date | emp_sign |
If the user inserts a new employee record, that record should be deleted automatically from the original table after 30 minutes. I don't want to keep a newly inserted record for longer than 30 minutes, and I don't know how to achieve this.
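A sketch under two assumptions (the table is called Emp, and an insert timestamp column can be added to it): a stored procedure deletes anything older than 30 minutes, and a SQL Agent job runs it every few minutes.
ALTER TABLE dbo.Emp ADD created_at datetime NOT NULL DEFAULT GETDATE();   -- records when each row was inserted
GO
CREATE PROCEDURE dbo.usp_PurgeNewEmployees
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.Emp
    WHERE created_at < DATEADD(MINUTE, -30, GETDATE());   -- anything inserted more than 30 minutes ago
END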
View 12 Replies
View Related
Oct 22, 2015
I have a job which executes hourly.
Essentially:
Begin
Truncate table A
Insert into A
(Col1,
Col2,
Col3...
)
Select Value1,
Value2,
Value3...
From Table B
End
The insert query takes approximately 3.5 minutes to execute. What's occurring is that the table is immediately truncated, so there are no rows in the table for those 3.5 minutes.
How can I avoid this gap, where there are no rows in the table for that period of time during the job execution? The table could be locked, but that doesn't seem like the best solution.
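A sketch of the staging-and-swap pattern (table names are placeholders; the staging table must match A's structure exactly): load the new data into a staging table, then swap it in almost instantly, so readers never see an empty table.
TRUNCATE TABLE dbo.A_Staging;

INSERT INTO dbo.A_Staging (Col1, Col2, Col3)
SELECT Value1, Value2, Value3
FROM dbo.TableB;

BEGIN TRAN;
    TRUNCATE TABLE dbo.A;
    ALTER TABLE dbo.A_Staging SWITCH TO dbo.A;   -- metadata-only move of all staged rows into A
COMMIT;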
View 8 Replies
View Related
May 12, 2015
I am using SQL SERVER 2008R2, not Denali, so I cannot use OFFSET FETCH Clause.
In my stored procedure, I am doing a SELECT INTO #tblTemp FROM..., which works fine. This result set is going to be used in an SSIS package which will generate a pipe-delimited .txt file; that also works fine.
For recoverability's sake, I am trying to throttle back to commit chunks of 1000 rows per commit until there are no more rows, to avoid large rollbacks.
Q: Am I supposed to handle the transactions (begin/commit/rollback/end tran) when the records are being inserted into the temp table? Or when they are being selected from the temp table?
Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB destination (such as Rows per batch and Maximum insert commit size).
View 6 Replies
View Related
Apr 24, 2013
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') and name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio, and I have to cancel the query; cancelling also takes a whole lot of time. I am using SQL Server 2008.
View 4 Replies
View Related
Aug 2, 2007
Hi,
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out, PLEASE HELP!
I have a flat file which I read and then insert the data into a database table; that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is: if the package runs again, check whether the record already exists, based on two columns (date and hour), and if so do not insert it.
Thank you,
Aldo
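If the duplicate check can live in the database rather than the data flow, a NOT EXISTS guard on the two key columns is one option (a sketch; table and column names are placeholders). Inside SSIS itself, a Lookup transformation on the date and hour columns achieves the same filtering before the destination.
INSERT INTO dbo.TargetTable (LoadDate, LoadHour, Col1)
SELECT s.LoadDate, s.LoadHour, s.Col1
FROM dbo.StagingTable s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.TargetTable t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour);   -- skip rows already loaded for that date and hour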
View 1 Replies
View Related