Transferring Huge Data To Various Tables
Mar 13, 2007
In my transformation I am transferring a huge amount of data.
I have been using an OLE DB Command at the end to dump my incoming data into the respective tables.
For example:
Suppose there are two tables, Table 1 and Table 2.
For my incoming data I do a lookup that checks two unique columns against the unique columns in Table 1. If the record does not exist, I insert a record into Table 2, get the unique field of that record, and store it in a particular column of Table 1.
The data is very large. Is this the best way, or do you have any suggestions? Do let me know.
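For reference, a rough set-based sketch of one way that flow could be done (the table and column names are hypothetical, and it assumes the incoming rows are first landed in a staging table); doing the existence check and the key update as two statements is usually far cheaper than firing an OLE DB Command once per row:

-- Insert the rows whose two-column key is not yet in Table1 into Table2.
INSERT INTO dbo.Table2 (KeyColA, KeyColB)
SELECT s.KeyColA, s.KeyColB
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Table1 AS t1
                  WHERE t1.KeyColA = s.KeyColA
                    AND t1.KeyColB = s.KeyColB);

-- Copy the generated unique id from Table2 back into the matching Table1 column.
UPDATE t1
SET t1.Table2Id = t2.Table2Id
FROM dbo.Table1 AS t1
JOIN dbo.Table2 AS t2
  ON t2.KeyColA = t1.KeyColA
 AND t2.KeyColB = t1.KeyColB
WHERE t1.Table2Id IS NULL;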
View 5 Replies
Oct 12, 2006
Hi,
I have a situation where I have 4 tables:
1. Two dimension tables (parents): DIM1 with 50,000 rows and DIM2 with 1,000 rows
2. Fact 1 with 50 columns, 25 million rows, and FKs to DIM1 and DIM2
3. Fact 2 with 40 columns, 25 million rows, and FKs to DIM1 and DIM2
Fact 1 and Fact 2 actually hold the same related data, but since our Analysis Services cube person wanted no fact table to have more than 50 columns, we split it into two tables; they share the same compound key.
That said, I have a situation where I have to select all the columns in both fact tables and do a GROUP BY. I wrote the query and ran "Analyze Query in the Database Engine Tuning Advisor" for it. It gave a bunch of recommendations about statistics and indexes, which I created. When I executed the query the result came up in a matter of seconds, which was good.
In the query I had a condition: MarketName = 'Bridgeview' and DateID = 344 (the FK of today-1).
When I wanted the data for the last 30 days I changed it to DateID IN (> FK of today-32 AND < FK of today), and the query responded and worked fine.
But when I changed the query to MarketName = 'Aurora' (a different market than the one I used when I ran the Tuning Advisor), an empty result set is returned. When I remove the MarketName condition it should return all markets' data, but it returns only Bridgeview data.
I know the data for all markets is in the table, since reports are rendered from these fact tables for all of these markets (I also ran queries to check the fact table data).
I cannot work out why the query behaves like this. It responds to the date change, but not to the MarketName change.
I would really appreciate it if anyone can help me point out the problem.
Thanks,
Venkat
View 3 Replies
View Related
May 25, 2006
I am trying to transfer data into SQL Server from Excel using DTS.
I get error for a particular field:
Data for Source Column 3('Col3') is too large for the specified buffer size.
I gathered from other resources that this error is due to some record in the Excel sheet for which that particular field is too big,
and that only the first 8 records are read to determine the data type.
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q281517
The fix is to change the registry value, as mentioned in the above link. But that is not an option for me, as it slows down the process a lot.
Does anybody else have any other suggestion for this?
I have an idea that worked, but for that I have to manually insert a record at the top of the Excel sheet with large data in that field (the one that gives the problem), then run DTS and delete that record afterwards.
To automate this idea, I tried a DTS transfer from a dummy table (with a record containing large data) to this Excel sheet, but it puts the record at the bottom of the sheet. Is there any way to put this record at the top?
Any help in this regard will be greatly appreciated.
Thanks
Nicky
View 3 Replies
View Related
Jul 30, 2007
Hi All
I have a table which has about 30 thousand records.
I want to send all the records every day at night to the other interface (TIBCO) in the form of XML.
I can send all the records at once, or maybe 10 records at a time.
1. If I make one XML document with all the records and send it, it is risky as the XML would be quite big; it would also be time-consuming and would put a big load on the program (a .NET program will create the XML). Also, any abnormal error would kill the whole process and we would have to start again.
2. Process 10 or 20 records at a time, send the XML (with those 10 or 20 records), and keep doing this until all the records are sent. If there is any error in between in the .NET program, the program can start from that step again. The problem now is how I can keep track of the record number already sent to the other interface.
Can someone help?
Thanks
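One hedged way to keep track of what has already been sent is a watermark on an ascending key. The sketch below assumes a key column called RecordID and a one-row tracking table called ExportWatermark, both of which are placeholders for whatever the real schema has:

DECLARE @LastSent INT, @NewMax INT;

-- Highest RecordID that TIBCO has already received.
SELECT @LastSent = LastSentRecordID FROM dbo.ExportWatermark;

-- Next batch of 20 rows for the .NET program to turn into XML.
SELECT TOP (20) RecordID, Col1, Col2        -- Col1/Col2 stand in for the real columns
FROM dbo.MyTable
WHERE RecordID > @LastSent
ORDER BY RecordID;

-- Once the .NET program confirms delivery, advance the watermark to the
-- highest RecordID in the batch that was just sent.
SELECT @NewMax = MAX(RecordID)
FROM (SELECT TOP (20) RecordID
      FROM dbo.MyTable
      WHERE RecordID > @LastSent
      ORDER BY RecordID) AS batch;

UPDATE dbo.ExportWatermark SET LastSentRecordID = @NewMax;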
View 4 Replies
View Related
Aug 15, 2012
I need to check whether two developers did the job correctly and created identical tables.
The problem is more complex, but I will try to solve it somehow if I solve the problem of comparing two tables (let them be in different SQL Server 2008 databases) and their properties. No data needs to be compared.
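As a starting point, here is a sketch that compares just the column definitions using INFORMATION_SCHEMA (database and table names are placeholders); run it once in each direction, and an empty result both ways means the columns match. Constraints and indexes would need a similar comparison against the catalog views.

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION, NUMERIC_SCALE, IS_NULLABLE, ORDINAL_POSITION
FROM DatabaseA.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
EXCEPT
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION, NUMERIC_SCALE, IS_NULLABLE, ORDINAL_POSITION
FROM DatabaseB.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable';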
View 6 Replies
View Related
Jul 20, 2005
Hi All,
I've the following table with a PK defined on an IDENTITY column (INSERT_SEQ):

CREATE TABLE MYDATA (
    MID NUMERIC(19,0) NOT NULL,
    MYVALUE FLOAT NOT NULL,
    TIMEKEY INTEGER NOT NULL,
    TIMEKEY_DTTM DATETIME NULL,
    IID NUMERIC(19,0) NOT NULL,
    EID NUMERIC(19,0) NOT NULL,
    INSERT_SEQ NUMERIC(19,0) IDENTITY(1,1) NOT NULL)
GO
ALTER TABLE MYDATA
ADD CONSTRAINT PK_MYDATA PRIMARY KEY (INSERT_SEQ)
GO

The TIMEKEY_DTTM field is generated, from the value actually inserted into the TIMEKEY field, by the following trigger:

CREATE TRIGGER TIMEKEY1
ON MYDATA
FOR INSERT AS
BEGIN
    DECLARE @M_TIMEKEY_DTTM DATETIME
    SELECT @M_TIMEKEY_DTTM = DATEADD(SECOND, INS.TIMEKEY + EP.GMT_OFFSET * 0, '1970-01-01 00:00:00.000')
    FROM INSERTED INS, LOCATIONINFO EP
    WHERE INS.EID = EP.EID
    UPDATE MYDATA
    SET TIMEKEY_DTTM = @M_TIMEKEY_DTTM
    FROM INSERTED INS, MYDATA MD
    WHERE MD.INSERT_SEQ = INS.INSERT_SEQ
END
GO

There is also a composite, non-unique index defined on the tuple (MID, IID, TIMEKEY, EID):

CREATE INDEX IX_METDATA ON MYDATA (MID, IID, TIMEKEY, EID)
GO

As a consequence of an application design change, I would also like to change this index to be UNIQUE, but when I try to drop and recreate it I get an error, because the table stores some duplicated rows...
In order to successfully upgrade the index definition, I wrote some DML statements to look up and remove the duplicated rows, keeping only the first record inserted, i.e. the one with the lowest INSERT_SEQ:

--
-- This table stores the number of duplicated records eventually discovered
-- in the MYDATA table; the initial value for the NUM_DUPLICATES field is
-- 0 (no duplicated record)
--
DROP TABLE DUPLICATES
GO
CREATE TABLE DUPLICATES (
    TABLENAME VARCHAR(17),
    NUM_DUPLICATES NUMERIC(19,0) )
GO
INSERT INTO DUPLICATES VALUES ('MYDATA', 0)
GO
INSERT INTO DUPLICATES VALUES ('CATEGORIESDATA', 0)
GO
--
-- ///////// CLEAN UP OF MYDATA TABLE
--
DROP TABLE TMP_MYDATA
GO
CREATE TABLE TMP_MYDATA (
    MID NUMERIC(19,0) NOT NULL,
    TIMEKEY INTEGER NOT NULL,
    IID NUMERIC(19,0) NOT NULL,
    EID NUMERIC(19,0) NOT NULL,
    INSERT_SEQ NUMERIC(19,0) )
GO
--
-- Insert into the TMP_MYDATA table all the duplicated records for
-- the tuple (MID,IID,TIMEKEY,EID) and NULL for the INSERT_SEQ field
--
INSERT INTO TMP_MYDATA (MID, IID, TIMEKEY, EID)
SELECT MID, IID, TIMEKEY, EID
FROM MYDATA
GROUP BY MID, IID, TIMEKEY, EID
HAVING COUNT(*) > 1
GO
--
-- Updates the INSERT_SEQ field to the lowest value in the group
-- of duplicated records
--
UPDATE TMP_MYDATA
SET TMP_MYDATA.INSERT_SEQ = (
    SELECT MIN(INSERT_SEQ)
    FROM MYDATA
    WHERE TMP_MYDATA.MID = MYDATA.MID AND
          TMP_MYDATA.IID = MYDATA.IID AND
          TMP_MYDATA.TIMEKEY = MYDATA.TIMEKEY AND
          TMP_MYDATA.EID = MYDATA.EID )
GO
--
-- Updates the value of NUM_DUPLICATES for the MYDATA table.
--
UPDATE DUPLICATES
SET NUM_DUPLICATES = (SELECT COUNT(*) FROM TMP_MYDATA)
WHERE TABLENAME = 'MYDATA'
GO
--
-- Delete from the MYDATA table all the duplicated records,
-- keeping only the row with the lowest INSERT_SEQ.
-- The delete is performed only if there are duplicated records;
-- this is achieved using a "short circuit" AND on the number of records
-- stored in the NUM_DUPLICATES field of the DUPLICATES table for
-- the MYDATA table...
--
DELETE FROM MYDATA
WHERE ( SELECT NUM_DUPLICATES FROM DUPLICATES WHERE TABLENAME = 'MYDATA') > 0 AND
EXISTS ( SELECT 1
         FROM TMP_MYDATA
         WHERE MYDATA.MID = TMP_MYDATA.MID AND
               MYDATA.IID = TMP_MYDATA.IID AND
               MYDATA.TIMEKEY = TMP_MYDATA.TIMEKEY AND
               MYDATA.EID = TMP_MYDATA.EID AND
               MYDATA.INSERT_SEQ > TMP_MYDATA.INSERT_SEQ )
GO

This technique works fine on a normal table (1M recs) but is not very performant on huge tables (>10M records)!
Do you know a better way to achieve the task of removing all the duplicate records, preserving the lowest INSERT_SEQ among the duplicates and also preserving the sequence seed, so that a new record inserted at time t1 > t0 is enumerated with an INSERT_SEQ|t1 > max(INSERT_SEQ)|t0?
Thanks a lot for your help!
Patrizio
PS. Sorry for such a large post!
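For comparison, a much shorter route exists if the server can be upgraded: the sketch below assumes SQL Server 2005 or later (where ROW_NUMBER() is available) and deletes every duplicate except the row with the lowest INSERT_SEQ in a single statement. Deletes never reset an IDENTITY, so rows inserted later still receive an INSERT_SEQ higher than any existing value.

WITH Ranked AS (
    SELECT INSERT_SEQ,
           ROW_NUMBER() OVER (PARTITION BY MID, IID, TIMEKEY, EID
                              ORDER BY INSERT_SEQ) AS rn
    FROM MYDATA
)
-- Keep rn = 1 (the lowest INSERT_SEQ of each group) and remove the rest.
DELETE FROM Ranked
WHERE rn > 1;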
View 1 Replies
View Related
Nov 19, 2015
I have many tables, and I just want to print the relationships between them. The ones without foreign key to primary key relations are irrelevant. I made a diagram of all tables in SQL Server Management Studio, and it shows the key relations, but it's a very large diagram both horizontally and vertically. Is there a way to print the whole thing so that it doesn't take endless pages that I don't know how to piece together?
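If a printable list is acceptable instead of the diagram, a sketch like the one below (against the SQL Server 2005-and-later catalog views) returns every foreign key relationship as plain rows that can be pasted into a document:

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS ChildTable,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS ChildColumn,
       OBJECT_NAME(fk.referenced_object_id) AS ParentTable,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ParentColumn
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
  ON fkc.constraint_object_id = fk.object_id
ORDER BY ChildTable, ForeignKeyName;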
View 3 Replies
View Related
Apr 21, 2007
I am in the process of designing a SQL 2005 database with tables that may hold several hundreds of millions of rows.
Due to various constraints, I am trying to save as much space as possible by optimizing the size of a row. Currently one row contains the following columns:
byte(4), byte(3), byte(3) = 10 bytes.
Adding 4 bytes for the row header, plus 3 bytes for the null bitmap (as described in BOL) I am ending up with 17 bytes/row. In a real world test it was an average of 18.3 bytes/row.
There are no indexes and no primary key, and all columns are NOT NULL. Hence my question: since no column allows a null value, is there a possibility to "remove" the 3-byte null bitmap?
Is there any other way to shave off one or two more bytes by using a clustered index etc?
Thanks
View 4 Replies
View Related
Jun 7, 2006
I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source may not exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same as in SQL Server.
Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact, case 2 is the most critical.
I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composite keys of up to 10 columns), and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.
The result of this query would be exactly what I need to import.
How do I do this in SSIS? I haven't figured out how to join tables in different data sources yet.
In fact, I cannot write a stored procedure to do that, since one of the sources is in a data source that is not SQL Server.
I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do.
Another possibility is to use the Merge Join, but due to the sorting I believe its performance would be terrible!
Thanks in advance for your suggestions!
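One hedged alternative to joining across data sources inside SSIS: bulk-load the external extract into a staging table first (a plain fast-load destination, no transformation), then let one set-based query pick out the new and changed rows. All the names below are placeholders, and the join would list all ten key columns of the compound key.

SELECT s.*
FROM dbo.Staging AS s
LEFT JOIN dbo.Destin AS d
  ON d.Key1 = s.Key1
 AND d.Key2 = s.Key2              -- ...and the rest of the compound key
WHERE d.Key1 IS NULL              -- case 1: record does not exist in DESTIN yet
   OR d.SomeValue <> s.SomeValue; -- case 2: record exists but the value changed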
View 9 Replies
View Related
Jul 7, 2015
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows of data from one DB to another? I checked BULK INSERT, but that transfers data from a file to the DB, not DB to DB.
View 6 Replies
View Related
Feb 20, 2008
hi all,
I am using sql server 2005.
I am making one web form where data from one table (a temporary table) is moved to three tables: two header tables and one detail table. I would like to know whether making a package is the better option, or whether directly writing a procedure to move the data into the respective tables is better.
Performance-wise, which is faster? If there is any other better option, please guide me.
If making an SSIS package is better, please give me some useful pointers.
Please help.
Thank you
View 3 Replies
View Related
Apr 3, 2000
SQL 7 SP1 NT4 SP5
I have a TRANSACTION table with 150 million rows.
I have a USER table.
Each user has about 600 records in the TRANSACTION table.
The TRANSACTION clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.
The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.
I want to go through the USER table and delete all users who have not visited me in a while.
I want to do this without substantially hindering performance in a production environment. I can perform this over a week period or two if needed.
The best way I could think of doing this was to grab x users at a time in a cursor and loop through, deleting their corresponding TRANSACTION records.
Does anyone have any ideas on a better way? What is going to happen to my indexes during this time?
Thanks !!!
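A rough sketch of one batched approach (the LastVisit column and the six-month cut-off are assumptions, and the reserved table names need brackets); run it from a job during quiet hours so each delete stays small. Each batch only has to maintain the USERID-led indexes for the rows it removes, so the index work is spread out instead of taken in one huge transaction.

SET ROWCOUNT 5000;   -- SQL 7/2000-style batch limiter for the DELETE below
WHILE 1 = 1
BEGIN
    DELETE t
    FROM [TRANSACTION] AS t
    JOIN [USER] AS u
      ON u.USERID = t.USERID
    WHERE u.LastVisit < DATEADD(month, -6, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;      -- nothing left to purge
    WAITFOR DELAY '00:00:05';     -- give production work room between batches
END
SET ROWCOUNT 0;
-- The USER rows themselves can be removed the same way once their transactions are gone.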
View 3 Replies
View Related
Jun 12, 2006
Hi,
I have 4 tables, each consisting of approx. 10,000,000 rows. They have the same columns (fTime [datetime] and bid [money]). What I want to do is collect all of the data into one of the tables, in ascending order by fTime.
PS: I want to do it as fast as possible as well.
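A minimal sketch (the table names are assumptions): append the other three tables into the first in one pass each. The physical insert order does not matter; an ORDER BY on the reading queries, backed by an index on fTime, is what gives the ascending order.

INSERT INTO dbo.table1 (fTime, bid)
SELECT fTime, bid FROM dbo.table2
UNION ALL
SELECT fTime, bid FROM dbo.table3
UNION ALL
SELECT fTime, bid FROM dbo.table4;

-- Keeps later "ORDER BY fTime" reads fast.
CREATE INDEX IX_table1_fTime ON dbo.table1 (fTime);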
View 1 Replies
View Related
Nov 20, 2006
Hi, I've an application, let's call it simply "A", which creates two huge tables in a Microsoft SQL database. Let's call them "table1" and "table2". It saves a really large amount of data into these tables. After application "A" has finished, another application is executed which deletes these two tables. Then application "A" is started again and it creates the two tables again, but the amount of data becomes bigger. It can only proceed if the tables were deleted completely before and the database is empty. This is the procedure which I repeat very often, but every time the amount of data becomes bigger (table1 and table2 become bigger). A couple of times it works fine, but at some point it seems the data becomes too big and application "A" fails, most likely because the data wasn't removed correctly / completely. This is my code for deleting the two tables; maybe there is something I have to change:

try
{
    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(
        "Server=mycomputerdbname;Integrated Security=SSPI;" + "Initial Catalog=testing");
    builder["Server"] = "(local)dbname";
    builder["Connect Timeout"] = 10;
    builder["Trusted_Connection"] = true;
    builder["Initial Catalog"] = ((ComponentConfiguration)this.componentConfig).Persistency.DatabaseName;

    SqlConnection sqlconnection = new SqlConnection();
    sqlconnection.ConnectionString = builder.ConnectionString;
    sqlconnection.Open();

    SqlCommand cmd1 = new SqlCommand("DROP TABLE table1"); // TODO: delete all tables
    SqlCommand cmd2 = new SqlCommand("DROP TABLE table2"); // TODO: delete all tables
    cmd1.Connection = sqlconnection;
    cmd2.Connection = sqlconnection;
    cmd1.ExecuteNonQuery();
    Thread.Sleep(7000);
    cmd2.ExecuteNonQuery();
    Thread.Sleep(7000);
    sqlconnection.Close();
    Thread.Sleep(3000);
}
catch { }

Thanks for help!
mulata
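As a small, hedged tweak, the SQL sent by those two commands could guard the drops so that a table left over from (or already removed by) an earlier run never makes the statement itself fail, which matters here because the empty catch block hides any error:

IF OBJECT_ID(N'dbo.table1', N'U') IS NOT NULL
    DROP TABLE dbo.table1;
IF OBJECT_ID(N'dbo.table2', N'U') IS NOT NULL
    DROP TABLE dbo.table2;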
View 6 Replies
View Related
May 31, 2007
Hi Good morning to all,
My day started with loading a huge volume of data, and my data flow task failed to do it.
My data flow has a flat file connected to an OLE DB destination. This is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.
I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.
The process DTSDebug.exe is using more than 2 GB of memory, and my average CPU usage is 70% while one of the CPUs is hitting 100% utilization.
I'm very new to SSIS, so please give me some information on how to set my buffers. Do we have any PDF for performance and tuning in SSIS?
Do we have any bulk load transformation in SSIS to load into DB2 UDB?
If so, how do I get it installed?
Thanks in advance,
Suresh N
View 2 Replies
View Related
Mar 20, 2007
Hello All,
I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then, the package execution takes a lot of time.
The tables hold on the order of 66,000,000 rows, and we were required to kill the package execution after it had taken more than 24 hours. The CPU usage was more than 13,000s and the disk I/O was well above 330,000,000. I am new to the tit-bits of SSIS. Can anyone please tell me why the package has become so resource hungry?
Thanks in advance,
Atul
View 3 Replies
View Related
Aug 19, 2007
Hi All!
I have an Issue.
I am calling an RDL file through the URL, and I am passing Format=Excel in the URL.
E.g. http://harinarayana/ReportServer/......&Format=Excel&...........
If the data is around more than 20,000 records, it is not able to export, and
an error like "The service is not available" is displayed.
Does anyone have any solution for scenarios like this? It would be of great help to me.
Regards
Hari
View 1 Replies
View Related
Oct 26, 2001
I created some MDF/LDF files in SQL 7.0 containing my DB. After that I did not detach them from SQL Server 7.0 and physically formatted the hard drive. The SQL Server 7.0 data directory is still on the D drive; I only formatted the C drive. Before that, NT 4.0 and SQL Server 7.0 were running, but now I have installed Windows 2000 Server and SQL Server 2000 on this machine.
Is there any way to access the previous DBs (MDF/LDF) and put them into SQL Server 2000?
Thanking you in anticipation
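A hedged sketch of what attaching the files could look like in SQL Server 2000 (the database name and file paths are placeholders); this assumes the MDF/LDF files on the surviving D drive were shut down cleanly, since the database was never detached:

EXEC sp_attach_db
     @dbname    = N'MyOldDatabase',
     @filename1 = N'D:\MSSQL7\Data\MyOldDatabase.mdf',
     @filename2 = N'D:\MSSQL7\Data\MyOldDatabase_log.ldf';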
View 1 Replies
View Related
Mar 16, 2004
I have a huge table with a 4-column primary key on it. I need to delete data from this table (approx. 5.6 million records to be deleted). It takes a hell of a lot of time to delete it with a normal query.
Can someone please suggest me a better way?
Any help will be appreciated.
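One commonly suggested alternative, as a rough sketch (the table name and keep condition are assumptions, and it only works if no foreign keys reference the table): when most of the rows are going away, copying the survivors out and truncating can be far cheaper than logging 5.6 million individual deletes.

-- Copy the rows to KEEP (the date filter is purely hypothetical).
SELECT *
INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE CreatedDate >= '20040101';

TRUNCATE TABLE dbo.BigTable;       -- minimally logged, keeps the table definition

INSERT INTO dbo.BigTable
SELECT * FROM dbo.BigTable_keep;

DROP TABLE dbo.BigTable_keep;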
View 14 Replies
View Related
Aug 21, 2007
Hi all,
I've faced a problem with the error below when I load 1.5 million rows into an Oracle database.
The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers.
Please help. Thanks
View 19 Replies
View Related
Dec 3, 2006
Hi all,
In the process of building an ETL tool, we are in a situation where a table has to be loaded on an incremental basis. On the first run, all the records (approx. 100 lakh, i.e. 10 million) have to be loaded. From the next run onwards, only the records that were updated since the last run of the package, or that were newly added, are to be pulled from the source database. One idea we had was to have two OLE DB Source components: in the first, get the records that were updated or newly added (since we have update columns in the DB, getting them is fairly simple); in the second OLE DB Source, load all the records from the destination, pass them into a Merge Join, then have a Conditional Split down the pipeline and handle the updates and inserts.
Now the question is: how slow is this going to be? Could there be a case where the source DB returns records pretty fast and the Merge Join fails while waiting for all the records from the destination?
What might be the ideal way to go about my scenario? Please advise...
Thanks in advance.
View 13 Replies
View Related
Aug 30, 2005
Has anyone implemented splitting an application's data between two databases because the data size is extremely large? If so, could you please point me to relevant information? In this split-data scenario, a table automatically carries over to another database whenever the size limit for the current database is reached. The challenge here is for the DAL (data access layer) to automatically look in the appropriate database when the next row of data is in another database. Or perhaps there is another solution to this tera-size data problem. Any help on this would be greatly appreciated.
View 8 Replies
View Related
Jun 14, 2006
Hi.
I am trying to put a huge chunk of text into my database, for example information about a particular product which has more than 2,000 characters. I saw the datatype "nvarchar(MAX)" in SQL Server 2005 and was wondering if I can use this to store my text.
Thanks
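As a small sketch of what that could look like (the table and column names are made up): NVARCHAR(MAX) holds up to 2^31-1 bytes, roughly a billion Unicode characters, so a few thousand characters of product text is well within range.

CREATE TABLE dbo.Product (
    ProductID   INT IDENTITY(1,1) PRIMARY KEY,
    ProductInfo NVARCHAR(MAX) NOT NULL
);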
View 1 Replies
View Related
Oct 31, 2015
I have a database data file at almost 2 TB, maxing out a Windows drive. Only 16 GB is left. Should I just add another data file on another Windows drive for growth? Or move the current huge data file to a new GPT drive? Or do both: add another data file and move the existing one to its own new GPT drive?
The primary objective is to make do for now.
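For the "add another data file" option, a hedged sketch (the database name, path, and sizes are placeholders); SQL Server then spreads new allocations across the files in the filegroup, so the full drive stops being the constraint:

ALTER DATABASE MyBigDatabase
ADD FILE (
    NAME       = N'MyBigDatabase_data2',
    FILENAME   = N'E:\SQLData\MyBigDatabase_data2.ndf',
    SIZE       = 100GB,
    FILEGROWTH = 10GB
);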
View 1 Replies
View Related
Jul 20, 2005
I have a table which contains approx. 300,000 records. I need to import this data into another table by executing a stored procedure. This stored procedure accepts the values from the table as parameters. My current solution reads the table in a cursor and executes the stored procedure for each row. This takes far too long, approx. 5-6 hrs. I need to make it better. Can anyone help? Samir
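If the stored procedure essentially just inserts one row per call, a hedged set-based sketch like the one below (table and column names are assumptions) replaces the cursor loop with a single statement and usually turns hours into minutes; if the procedure does more per row, its logic would have to be folded into the SELECT as well.

INSERT INTO dbo.TargetTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.SourceTable;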
View 2 Replies
View Related
Dec 30, 2007
Hi.
I am working on a serial tracking application using SQL Server 2005 and .NET. One of the requirements is an ad-hoc file export utility in which users can drag and drop fields from a set of tables and export the results to CSV. It all sounds OK, and SQL Server Reporting Services' Report Builder seems to be just the right tool for it, but there is one problem:
The report size is big, about 7K-8K pages and 4-5 columns wide; while rendering the report, IIS memory usage shoots up to about 2 GB and stays at about 2 GB.
Any idea if something can be done to mitigate this problem? Note that I don't need the HTML rendering at all. All I need is the CSV at the end of the day, while users are able to choose columns in an ad-hoc manner.
View 7 Replies
View Related
Jul 15, 2007
This problem may appear trivial to you guys, but it has been troubling me for quite some time now! I created a website using Express Studio and it worked flawlessly on localhost. Now I want to move it to a remote server, and my host created a database named ASPNETDB for me on the server. The problem is that that ASPNETDB database is virtually empty, and I want to copy all my tables, stored procedures, etc. (the complete database) to the remote ASPNETDB database. How can I do that? In Management Studio Express Edition there is no command available to copy all the contents of one database into another. Please help, I am quite confused!
View 4 Replies
View Related
May 14, 2015
declare @error int, @rowcount int
select @rowcount = COUNT(1) FROM STG_BCDR;
while @rowcount > 0
begin
BEGIN TRAN Deletion
[code]....
In the code above I try to delete records batch by batch to avoid table locking on the BCDR table. The total number of records in this BCDR table is 40,000. However, when I look at the execution plan, the BCDR table still gets a clustered index scan, which means the locking still happens.
If I change the DELETE TOP (5000)...... to DELETE TOP (5)...., then there is a clustered index seek, which is good. The problem is that each run then only deletes the top 5 records, which means it will take a very long time to remove all the data.
How can I handle this situation so that I can delete this huge amount of data without table locking happening? If table locking happens, other users will not be able to access the table at the same time.
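One hedged way to keep a bigger batch size and still get a seek is to drive each batch by the clustered key instead of a bare TOP. The sketch below assumes the clustered index is on a column called BCDR_ID (a placeholder), and any extra filter from the real delete would have to appear in both statements. Keeping batches somewhat under 5,000 rows also helps stay below the point where row locks typically escalate to a table lock.

DECLARE @MaxKey INT;

WHILE 1 = 1
BEGIN
    -- Find the upper key of the next batch.
    SELECT @MaxKey = MAX(BCDR_ID)
    FROM (SELECT TOP (4000) BCDR_ID
          FROM dbo.BCDR
          ORDER BY BCDR_ID) AS batch;

    IF @MaxKey IS NULL BREAK;     -- no rows left, done

    DELETE FROM dbo.BCDR
    WHERE BCDR_ID <= @MaxKey;     -- range seek on the clustered key
END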
View 6 Replies
View Related
Aug 14, 2015
I need to copy data from warehouse tables to master tables on different SQL instances. The refresh needs to be done once an hour. What is the best way to do this: SQL Agent jobs or SSIS packages?
View 3 Replies
View Related
Jul 13, 2007
Please give me advice.
View 1 Replies
View Related
Dec 9, 2007
Hi all,
I have a large Excel file with one large table which contains data. I've built a SQL Server database and I want to fill it with the data from the Excel file.
How can this be done?
Thanks, Michael.
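One possible route, sketched below with OPENROWSET and the Jet provider (the file path, sheet name, and target table are placeholders, and "Ad Hoc Distributed Queries" has to be enabled on the server); the SQL Server Import and Export Wizard is the point-and-click alternative for the same job.

INSERT INTO dbo.TargetTable
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Data\BigFile.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]');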
View 1 Replies
View Related
May 27, 2007
Dear all,
I have a problem with the log.
MSSQL has written a 4 GB log, so please, how can I eliminate this huge log size?
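A hedged sketch for SQL Server 2000/2005 (the database, file, and path names are placeholders): the log can only shrink after it has been backed up (or the database has been switched to the SIMPLE recovery model, if point-in-time restores are not needed), and then DBCC SHRINKFILE releases the space.

BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.bak';

DBCC SHRINKFILE (MyDatabase_log, 500);   -- shrink the log file to about 500 MB

-- Optional: stop the log growing like this again if full recovery is not required.
-- ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;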
View 1 Replies
View Related