Transact SQL :: Copy ALL Rows From DB Table Out To Txt File
Jul 29, 2015
Copy out all data from a DB table into/across delimited text file(s), ensuring that each text file's size is no more than 3MB. I have created an SSIS solution which achieves this requirement... well, sort of achieves it. Here is what the current solution (sparing the minute details) does in a nutshell, and the problems with it:
1) Created a function (script below) which finds the maximum row size in bytes in a given DB table and uses it to calculate how many rows can be copied out into a text file without exceeding the 3MB size limit.
For instance: a selected DB table had 788 rows in total, and this function returned a value of 181 rows for it {select [dbo].[udf_GetRowPartitionNumber]('<TableName>') as #ofRowstoPartitionTableby --181}, meaning that in order not to exceed the 3MB-per-file requirement, we had to copy all the data from the DB table across (create) 5 text files {Select CEILING(788/181.0) --5}.
we can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file TEXT_1400.txt has 1400 columns. I am unable to load data into my db table using BCP or BULK INSERT commands, since a maximum of 1024 columns is allowed per table in SQL Server 2008.
We can still go ahead and create a 'wide table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on the wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.
Is there any proper way to load this kind of files into the db table?
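One workaround often suggested for this is an explicit format file, so that BCP/BULK INSERT map every field by hand instead of auto-detecting the (sparse) column layout. A minimal sketch, assuming a hypothetical wide staging table dbo.WideStage; the generated format file must then be edited so its field count and order match the 1400-column text file:

REM From a command prompt: generate a skeleton format file from the target table
bcp MyDb.dbo.WideStage format nul -c -t, -f TEXT_1400.fmt -S MyServer -T

-- Then, in T-SQL, load the file using the corrected format file
BULK INSERT dbo.WideStage
FROM 'C:\data\TEXT_1400.txt'
WITH (FORMATFILE = 'C:\data\TEXT_1400.fmt', TABLOCK);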
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2 select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before carrying out any insert operation using the above statement. I had the table trigger below on [SRV1INS2].BB.dbo.Agents2, but it did not perform what I intended:
USE [BB]
GO
/****** Object: Trigger    Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
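For this pattern, an INSTEAD OF INSERT trigger is usually the better fit, since an AFTER trigger only fires once the new rows already sit alongside the old ones. A minimal sketch, assuming Agents2 has no identity column so the inserted rows can be passed straight through:

CREATE TRIGGER [dbo].[trg_Agents2_ReplaceOnInsert]
ON [dbo].[Agents2]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Clear the table first, then let the incoming rows through
    DELETE FROM dbo.Agents2;
    INSERT INTO dbo.Agents2
    SELECT * FROM inserted;
END
GO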
Hi to all. I have a strange question, but I need to work out the situation that I'll describe here: I need to replicate one row in my table into the same table, but with small changes in some fields. I mean: copy the row and insert it again into the same table I copied it from, with changes in field values. Do you know any way I can do this (temporary table, stored procedure and so on), BUT in one shot: one action for the select, insert and update operations?
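This fits in a single INSERT...SELECT: select the source row and override the changed fields inline. A sketch, with table and column names assumed for illustration:

-- Copy one row back into the same table, changing some values on the way
INSERT INTO dbo.MyTable (Col1, Col2, Col3)
SELECT Col1,
       'new value',      -- changed field
       Col3 + 1          -- another changed field
FROM dbo.MyTable
WHERE ID = @SourceID;    -- the row being replicated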
PeopleID in the People table is the primary key and a foreign key in the PeopleCosts table. PeopleID is an autonumber.
The major fields in the People table are PeopleID | MajorVersion | SubVersion. I want to create a new copy of the data for an existing subversion (say from subversion 1 to 2) in the same table. When the new data is copied, PeopleID gets incremented, so how do I copy the related data in the other table (PeopleCosts) with the new set of PeopleIDs?
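One common pattern for this is MERGE with an OUTPUT clause: unlike OUTPUT on a plain INSERT, OUTPUT on a MERGE can reference source columns, which lets you capture the old-ID-to-new-ID mapping and reuse it for PeopleCosts. A sketch, with the non-key column names assumed:

DECLARE @Map TABLE (OldPeopleID int, NewPeopleID int);

MERGE dbo.People AS tgt
USING (SELECT PeopleID, MajorVersion FROM dbo.People WHERE SubVersion = 1) AS src
    ON 1 = 0                                          -- never matches, so every source row inserts
WHEN NOT MATCHED THEN
    INSERT (MajorVersion, SubVersion) VALUES (src.MajorVersion, 2)
OUTPUT src.PeopleID, inserted.PeopleID INTO @Map;     -- old ID -> new identity value

INSERT INTO dbo.PeopleCosts (PeopleID, CostAmount)    -- CostAmount is a placeholder column
SELECT m.NewPeopleID, pc.CostAmount
FROM dbo.PeopleCosts pc
JOIN @Map m ON pc.PeopleID = m.OldPeopleID;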
I am using a stored procedure to load a gridview, but the problem is that I am not getting all rows from the first table [Subject] when applying conditions on the second table [Faculty_Subject], as you can see below. If I apply the condition:
Faculty_Subject.Class_Id=@Class_Id
then I don't get all subjects from the Subject table. How can this be achieved?
Sql Code:
GO
ALTER Proc [dbo].[SP_Get_Subjects_Faculty_Details]
    @Class_Id int
AS
BEGIN
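The usual cause of this symptom is that the Faculty_Subject filter sits in the WHERE clause, which turns an outer join back into an inner join. Moving the condition into the ON clause of a LEFT JOIN keeps every Subject row. A sketch, with column names assumed:

SELECT S.Subject_Id, S.Subject_Name, FS.Faculty_Id
FROM dbo.[Subject] S
LEFT JOIN dbo.Faculty_Subject FS
       ON FS.Subject_Id = S.Subject_Id
      AND FS.Class_Id = @Class_Id      -- filter belongs in ON, not WHERE
ORDER BY S.Subject_Id;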
This is the error I get with a simple Data Reader Source (SELECT * FROM A) to SQL Server Destination copy between identical tables, from a linked server to a local one:
ID   Flag  TestDate    Value  Comment
111  2     12/15/2014  7.5    null
222  2     Null        10     received
The Matrix_Current table could have one or multiple rows, as below.
ID   Flag  TestDate    Value  Comment
111  2     01/26/2015  7.9
111  2     02/23/2015  7.9
111  2     04/07/2015  6.8
222  1     null        8      test comment 1
222  3     null        9      test comment 2
When I run the update below

UPDATE M
SET M.Flag = MC.Flag,
    M.TestDate = MC.TestDate,
    M.Value = MC.Value,
    M.Comment = MC.Comment
FROM dbo.Matrix M
INNER JOIN dbo.Matrix_Current MC ON M.ID = MC.ID
the Matrix table has the values below:
ID   Flag  TestDate    Value  Comment
111  2     01/26/2015  7.9
222  1     Null        8      test comment 1
I want to update the Matrix table from all rows in Matrix_Current; the final table should look like below:
ID   Flag  TestDate    Value  Comment
111  2     04/07/2015  6.8
222  3     Null        9      test comment 2
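When Matrix_Current holds several rows per ID, the join update picks one of them arbitrarily. Numbering the rows per ID and joining only the "latest" one produces the result above. A sketch; the ORDER BY is an assumption (TestDate DESC for ID 111, with Flag DESC standing in as a tie-breaker for ID 222's NULL dates; a real key column would be safer):

;WITH Latest AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY TestDate DESC, Flag DESC) AS rn
    FROM dbo.Matrix_Current
)
UPDATE M
SET M.Flag = L.Flag,
    M.TestDate = L.TestDate,
    M.Value = L.Value,
    M.Comment = L.Comment
FROM dbo.Matrix M
JOIN Latest L ON M.ID = L.ID AND L.rn = 1;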
I have a question regarding the total count of table rows.
Select count(name) from test; let's say I get a count of 200. And select count(lastname) from test1 also gives 200. I should store these counts in a message like: "There are nn name items awaiting your attention and nn pending lastname's awaiting your approval".
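A sketch of one way to build that message, assuming the counts come from the two queries shown:

DECLARE @NameCount int, @LastNameCount int;

SELECT @NameCount = COUNT(name) FROM test;
SELECT @LastNameCount = COUNT(lastname) FROM test1;

SELECT 'There are ' + CAST(@NameCount AS varchar(12))
     + ' name items awaiting your attention and '
     + CAST(@LastNameCount AS varchar(12))
     + ' pending lastname''s awaiting your approval' AS Notification;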
I am using Sql Server 2008 R2. I have an existing query that basically says:
Select Top 50 Subscriber_ID, Member_Name, Group_ID from my_table order by rand(checksum(newid()))
However the client now wants to have at least two from each group_id. There are 17 different groups. When I run this as is I get about six of the 17 groups in the results. How can I change this to get at least two results from each group_id?
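One approach is to randomize within each group using ROW_NUMBER, force the first two rows of every group to the front, and pad the remainder of the 50 at random. A sketch against the columns shown (17 groups x 2 = 34 guaranteed rows, 16 random fillers):

;WITH Ranked AS (
    SELECT Subscriber_ID, Member_Name, Group_ID,
           ROW_NUMBER() OVER (PARTITION BY Group_ID ORDER BY NEWID()) AS rn
    FROM my_table
)
SELECT TOP 50 Subscriber_ID, Member_Name, Group_ID
FROM Ranked
ORDER BY CASE WHEN rn <= 2 THEN 0 ELSE 1 END,   -- two per group sort first
         NEWID();                               -- the rest of the 50 at random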
ID (int, identity)
Name (nvarchar(255))
Block (nvarchar(50))
Street (nvarchar(255))
Floor (nvarchar(50))
Unit (nvarchar(50))
Address1 (nvarchar(255))
Address2 (nvarchar(255))
I want to iterate through the table and populate Address1 as [Block] [Street] #[Floor]-[Unit]. If the Floor field contains a number < 10 (e.g., '8'), I want to add a '0' before it (e.g., '08'). Same for Unit. How would I do this using cursors (or another recommended method)?
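A cursor would work, but a single set-based UPDATE is simpler here; the CASE expressions handle the zero-padding. A sketch, with the table name assumed:

UPDATE dbo.Addresses    -- table name is a placeholder
SET Address1 = [Block] + ' ' + [Street] + ' #'
             + CASE WHEN LEN([Floor]) = 1 THEN '0' + [Floor] ELSE [Floor] END
             + '-'
             + CASE WHEN LEN([Unit]) = 1 THEN '0' + [Unit] ELSE [Unit] END;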
I've a table with several columns and one identifier. I need to write to this table whenever a row is detected as modified with respect to the columns, not the identifier.
So I've created a temporary table to hold the candidate rows to write to the real table, but I need to detect which rows were actually modified. I've thought of using the checksum function, but I don't know how to use it or whether it is useful in this scenario.
Moreover, the temporary table collects the rows to write daily: the first day a row could have one value in its columns, the next day a different value, and the day after the same value as the first day.
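CHECKSUM can be used for exactly this: compare a checksum over the comparable columns (excluding the identifier) on both sides of the join. A sketch, with #Staging and col1/col2 standing in for the real temp table and columns; note that CHECKSUM can produce false matches, so BINARY_CHECKSUM or HASHBYTES is safer when collisions matter:

-- Write only the rows whose column values differ from the staged ones
UPDATE T
SET T.col1 = S.col1,
    T.col2 = S.col2
FROM dbo.RealTable T
JOIN #Staging S ON S.ID = T.ID
WHERE CHECKSUM(T.col1, T.col2) <> CHECKSUM(S.col1, S.col2);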
I need to use a BULK INSERT statement for copying a table with 200 million rows to another table on the same server. The table has no primary key or identity column. Script for BULK INSERT ...
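Note that BULK INSERT reads from a data file, not from another table, so for a same-server copy a minimally logged INSERT...SELECT is the usual tool. A sketch (minimal logging assumes SIMPLE or BULK_LOGGED recovery and an empty heap target):

INSERT INTO dbo.TargetTable WITH (TABLOCK)   -- TABLOCK enables minimal logging on a heap
SELECT *
FROM dbo.SourceTable;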
And I would like to produce the following outcome in the same table (using an update statement). As you can observe, it merges all overlapping dates with the same promotion ID by taking the minimum start date and maximum end date. Only the first row of an overlapping set is updated to the desired values, and its flag changes to 1; the other overlapping rows are set to NULL and their flag becomes 2.
The second part I would like to achieve is based on the first table as well. However, this time I would like to merge the dates using the minimum start date and the end date of the last overlapping row. Since the end date of the last overlapping row of promotion ID 1 is the row with ID 4 (end date 2015-05-29), the table will look as follows after the update.
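Without the sample tables it is hard to be exact, but for the first part (all overlapping rows of a promotion collapsing into one range) an updatable CTE with window functions is one possibility. A sketch, with ID/PromoID/StartDate/EndDate/Flag as assumed column names and assuming every row of a promotion overlaps; truly disjoint overlap groups would need a gaps-and-islands pass first:

;WITH G AS (
    SELECT ID, StartDate, EndDate, Flag,
           MIN(StartDate) OVER (PARTITION BY PromoID) AS MinStart,
           MAX(EndDate)   OVER (PARTITION BY PromoID) AS MaxEnd,
           ROW_NUMBER()   OVER (PARTITION BY PromoID ORDER BY StartDate, ID) AS rn
    FROM dbo.Promotion
)
UPDATE G
SET StartDate = CASE WHEN rn = 1 THEN MinStart ELSE NULL END,
    EndDate   = CASE WHEN rn = 1 THEN MaxEnd   ELSE NULL END,
    Flag      = CASE WHEN rn = 1 THEN 1 ELSE 2 END;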
The combination of the Student_Id, Subject_Id and Quarter columns is the primary key. One student can take one subject in a quarter. Now the new requirement is that a student can take multiple subjects in a quarter. So we need to add another table like the one below:
New table name: Student_Subject, with these columns: Student_Id, Subject_Id, Quarter
The combination of all three columns is the primary key.
After the new Student_Subject table is created, remove the Subject_Id column from the Student table.
When the user clicks a button after selecting multiple subjects and provides the col1 and col2 data, one row gets inserted into the Student table and multiple rows get inserted into the Student_Subject table.
Is there any other table design that satisfies one student can take multiple subjects in a quarter?
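The junction table described is the standard design for this many-to-many requirement. A sketch of the DDL, with data types assumed:

CREATE TABLE dbo.Student_Subject (
    Student_Id int NOT NULL,
    Subject_Id int NOT NULL,
    [Quarter]  int NOT NULL,
    CONSTRAINT PK_Student_Subject PRIMARY KEY (Student_Id, Subject_Id, [Quarter]),
    CONSTRAINT FK_StudentSubject_Student FOREIGN KEY (Student_Id) REFERENCES dbo.Student (Student_Id),
    CONSTRAINT FK_StudentSubject_Subject FOREIGN KEY (Subject_Id) REFERENCES dbo.[Subject] (Subject_Id)
);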
I have a temp table with the following columns and data
drop table #temp
create table #temp (id int, DLR_ID int, KPI_ID int, Brnd_ID int)
insert into #temp values (1,2343,34,2)
insert into #temp values (2,2343,34,2)
insert into #temp values (3,2343,34,2)
[Code]....
I use the rank function on that table and get the following results
select rank() over (order by DLR_ID,KPI_ID,BRND_ID ) Rown,* from #temp
I am interested only in the Rown and Id columns. For each Rown number, I need to get the min(ID) in the second column, and the duplicate ID should be in the 3rd column, as shown below. If I have 3 duplicate IDs, I should have 3 rows, with the 2nd column being the min(id) and the 3rd column holding one of the duplicate ids in ascending order (as shown in Rown=6).
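DENSE_RANK plus a windowed MIN gets both extra columns without a self-join (DENSE_RANK also numbers the groups consecutively, where RANK leaves gaps after each set of duplicates). A sketch against the #temp table above:

SELECT DENSE_RANK() OVER (ORDER BY DLR_ID, KPI_ID, Brnd_ID) AS Rown,
       MIN(id) OVER (PARTITION BY DLR_ID, KPI_ID, Brnd_ID) AS MinId,
       id AS DupId
FROM #temp
ORDER BY Rown, DupId;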
Now we have different packages for loading data into 4 tables, and these 4 packages all start at the same time. Before loading the data we have to set the flag to 1, and only then load; because of this, each package contains an UPDATE statement that sets the value to 1.
Now we are getting deadlocks because we are using the same table at the same time, even though we are updating different records.
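If each package touches only its own row, the deadlocks usually come from lock escalation or from the updates scanning the whole flag table. An index that lets each UPDATE seek its single row, plus a ROWLOCK hint, is a common fix. A sketch, with the flag table layout assumed:

UPDATE dbo.LoadFlag WITH (ROWLOCK)
SET FlagValue = 1
WHERE TableName = 'Table1';   -- each package names only its own row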
In the T-SQL 2012 update script listed below, the update only works for a few records, since TST.dbo.LockCombination.seq contains the value 1 in most cases. For every join listed below there should be 5 records, each with a distinct seq value of 1, 2, 3, 4, and 5. My goal is to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values 2 through 5, and then run the following update statement. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET LKC.combo = lockCombo2
FROM [LockerPopulation] A
JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
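One way to generate the missing seq 2-5 rows is to cross join the existing seq = 1 rows against a value list and insert only the combinations that do not exist yet. A sketch; lockerID and combo are assumed column names:

INSERT INTO TST.dbo.LockCombination (lockerID, seq, combo)
SELECT LKC.lockerID, N.seq, NULL             -- combo gets filled in by the later update
FROM TST.dbo.LockCombination LKC
CROSS JOIN (VALUES (2),(3),(4),(5)) AS N(seq)
WHERE LKC.seq = 1
  AND NOT EXISTS (SELECT 1
                  FROM TST.dbo.LockCombination X
                  WHERE X.lockerID = LKC.lockerID
                    AND X.seq = N.seq);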
Setup:
Windows Server 2003 R2 - Enterprise - SP2 - 32 Bit
SQL Server 2014 Express - 32 Bit
Problem: I have a calculated field on a PO table that adds up item prices in an Item table to get the total PO value. This works as expected until there are at least 10 rows in the PO table; from the 10th row on, the calculated field stops working and only shows 0.
I have experienced this before, and it seems like calculated fields break on the 10th row of a table and onward.
My PO table:
CREATE TABLE [dbo].[PO](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Quote_Number] [varchar](max) NULL,
    [Customer] [varchar](max) NULL,
    [CustomerPO] [varchar](max) NULL,
    [PO_Received_Date] [datetime] NULL,
    [Total_PO_Value] [decimal](18, 2) NULL,
I am looking for a SQL query to verify the inserted values from one table (in a CSV file) against another table (in a SQL database).
For example: I have the Values column below, present in one CSV file; after my data migration the values get stored in the Results table under the Message column.
I need to verify whether the values (1X, 1Y) were inserted, via a Message record like "successfully inserted value 1X".
Values (CSV)
1X
1Y
Results Table (SQL)
CreatedDate              Message
2015-08-04 08:45:29.203  successfully inserted value 1X
2015-08-04 08:44:29.103  TEst pass
2015-08-04 08:43:29.103  successfully inserted value 1X
2015-08-04 08:42:29.203  test point
2015-08-04 08:35:29.203  successfully inserted value 1Y
2015-08-04 08:30:29.203  Test Pass
2015-08-04 08:28:29.203  successfully inserted value 1Y
If all values are inserted:
Output: All values from the Values table were inserted successfully. Total count of values inserted: 2.
If only a few values are inserted, for example only 1X from the Values table appears in Message:
Example: Results Table
CreatedDate              Message
2015-08-04 08:45:29.203  successfully inserted value 1X
2015-08-04 08:44:29.103  TEst pass
2015-08-04 08:43:29.103  successfully inserted value 1X
2015-08-04 08:42:29.203  test point
Output: Not all values from the Values table were inserted successfully into the Results table. Total count of values inserted: 1. Missing values not inserted into the Results table: 1Y.
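A sketch of one way to produce both the count and the missing list, assuming the CSV values are first loaded into a table variable:

DECLARE @Values TABLE (Val varchar(10));
INSERT INTO @Values VALUES ('1X'), ('1Y');

-- Count of CSV values that show up in a "successfully inserted" message
SELECT COUNT(*) AS ValuesInserted
FROM @Values v
WHERE EXISTS (SELECT 1 FROM dbo.Results r
              WHERE r.Message = 'successfully inserted value ' + v.Val);

-- CSV values with no matching message
SELECT v.Val AS MissingValue
FROM @Values v
WHERE NOT EXISTS (SELECT 1 FROM dbo.Results r
                  WHERE r.Message = 'successfully inserted value ' + v.Val);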
I have a table at my initiator and a copy of this table at my target. Every morning I want to refresh the table at the target with the data from the initiator via Service Broker.
What would be the best way to do this?
One major consideration is that this table is very large: +-1.6 million rows.
I was thinking of exporting the table to a file such as a CSV, compressing the file, and then sending it via Service Broker. Then at the target I would uncompress the file and bulk copy the data into the target table.
But can this be done?
How would one go about sending a file via service broker?
For example, if I wanted to send a .csv or something via Service Broker, how would I do it?
I know I can send binary data, but I would have to equate the message to a file on disk, and how exactly would I receive this file on the other side and make sure it went to disk at the target? I am guessing it is a mixture of xp_cmdshell and Service Broker.
Can this work?
Or is there a much simpler way? Any thoughts would be appreciated.
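Sending the file itself is workable: read the compressed file into a varbinary(max) (it must stay under the 2 GB message limit, which 1.6 million compressed rows likely does) and SEND it on a conversation; the target's activation procedure then receives the blob and writes it to disk (e.g., via bcp or xp_cmdshell) before the bulk load. A sketch of the sending side only; all service, contract, and message type names here are assumptions:

DECLARE @dialog uniqueidentifier,
        @payload varbinary(max);

-- Read the compressed export into a blob
SELECT @payload = BulkColumn
FROM OPENROWSET(BULK 'C:\export\MyTable.csv.gz', SINGLE_BLOB) AS f;

BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Initiator/TableRefreshService]
    TO SERVICE '//Target/TableRefreshService'
    ON CONTRACT [//TableRefresh/Contract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//TableRefresh/FileMessage] (@payload);

A simpler alternative for a once-a-day refresh is often plain SSIS, replication, or BCP over a file share, with Service Broker reserved for just signaling that the file is ready.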
Begin
    Truncate table A
    Insert into A (Col1, Col2, Col3...)
    Select Value1, Value2, Value3...
    From Table B
End
The insert operation takes approximately 3.5 minutes to execute. What's occurring is that the table is immediately truncated, and there are no rows in the table for those 3.5 minutes.
How can I avoid this gap, where there are no rows in the table during the job execution? The table could be locked, but that doesn't seem like the best solution.
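One common way to close the gap is to load a staging copy first (the slow part) and then swap the two tables with metadata-only renames inside a short transaction, so readers see either the old or the new data but never an empty table. A sketch, with A_Staging as an assumed structurally identical copy of A:

-- Slow part happens outside the visible table
TRUNCATE TABLE dbo.A_Staging;
INSERT INTO dbo.A_Staging (Col1, Col2, Col3)
SELECT Value1, Value2, Value3 FROM dbo.B;

-- Near-instant swap
BEGIN TRAN;
    EXEC sp_rename 'dbo.A', 'A_Old';
    EXEC sp_rename 'dbo.A_Staging', 'A';
    EXEC sp_rename 'dbo.A_Old', 'A_Staging';
COMMIT;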
I am using SQL SERVER 2008R2, not Denali, so I cannot use OFFSET FETCH Clause.
In my stored procedure, I am doing a SELECT INTO #tblTemp FROM... Working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file... Working fine.
For recoverability's sake, I am trying to throttle back the commit chunks to 1000 rows per commit until there are no more rows; I am trying to avoid large rollbacks.
Q: Am I supposed to handle the transactions (begin/commit/rollback/end tran) when the records are being inserted into the temp table, or when they are being selected from the temp table?
Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB destination (such as Rows per batch and Maximum insert commit size).
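If the throttling is done on the T-SQL side, a keyed batch loop commits every N rows, so a failure rolls back at most one batch. A sketch, assuming an ID key on the source rows (writing the flat file itself involves no database transaction, so chunking only matters for the table inserts):

DECLARE @BatchSize int = 1000,
        @RowsMoved int = 1;

WHILE @RowsMoved > 0
BEGIN
    BEGIN TRAN;
        INSERT INTO dbo.Target (ID, Col1)
        SELECT TOP (@BatchSize) s.ID, s.Col1
        FROM #tblTemp s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.ID = s.ID)
        ORDER BY s.ID;
        SET @RowsMoved = @@ROWCOUNT;
    COMMIT;
END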
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF (@@ERROR <> 0) GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
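Two things are worth checking here. On SQL Server 2008, adding a NOT NULL column with a default physically updates every row (the metadata-only fast path for this arrived in later versions), so on a big Employee table it is genuinely slow; and the ALTER needs an exclusive schema lock, so it is often simply blocked by other sessions. A quick blocking check while it runs:

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;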
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out. PLEASE HELP!
I have a flat file which I read and then insert the data into a database table; that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is: if the package runs again, check whether the record already exists, based on two columns (date and hour), and if so do not insert the record.
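Inside SSIS, a Lookup transformation against the destination (sending only the "no match" output to the destination) is the usual fix. If the check is easier in T-SQL, stage the file rows and insert only the new (date, hour) pairs. A sketch, with table and column names assumed:

INSERT INTO dbo.Target (LoadDate, LoadHour, Col1)
SELECT s.LoadDate, s.LoadHour, s.Col1
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Target t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour);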