Problem With Huge Amount Of Data

Nov 20, 2006

Hi,

I have an application, let's call it simply "A", which creates two huge tables in a Microsoft SQL Server database.
Let's call them "table1" and "table2".
It saves a very large amount of data into these tables.
After application "A" has finished, another application is executed which deletes these two tables.
Then application "A" is started again and creates the two tables again, but with a larger amount of data.
It can only proceed if the tables were deleted completely beforehand and the database is empty.

This is the procedure I repeat very often, and every time the amount of data grows (table1 and table2 become bigger).
A couple of times it works fine, but at some point the data seems to become too big and application "A" fails.
Most likely because the data wasn't removed correctly/completely.

This is my code for deleting the two tables; maybe there is something I have to change:

try
{
    SqlConnectionStringBuilder builder =
        new SqlConnectionStringBuilder("Server=mycomputer\\dbname;Integrated Security=SSPI;" +
        "Initial Catalog=testing");

    builder["Server"] = "(local)\\dbname";
    builder["Connect Timeout"] = 10;
    builder["Trusted_Connection"] = true;
    builder["Initial Catalog"] = ((ComponentConfiguration)this.componentConfig).Persistency.DatabaseName;

    SqlConnection sqlconnection = new SqlConnection();
    sqlconnection.ConnectionString = builder.ConnectionString;
    sqlconnection.Open();
    SqlCommand cmd1 = new SqlCommand("DROP TABLE table1");   // TODO: delete all tables
    SqlCommand cmd2 = new SqlCommand("DROP TABLE table2");   // TODO: delete all tables

    cmd1.Connection = sqlconnection;
    cmd2.Connection = sqlconnection;

    cmd1.ExecuteNonQuery();
    Thread.Sleep(7000);
    cmd2.ExecuteNonQuery();
    Thread.Sleep(7000);
    sqlconnection.Close();
    Thread.Sleep(3000);
}
catch { }
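For reference, the empty catch block above swallows any error from the DROP statements, so a failed cleanup goes unnoticed and the next run of "A" starts against leftover data; the Thread.Sleep calls also shouldn't be needed, since ExecuteNonQuery only returns once the DROP has completed. A minimal T-SQL sketch of a cleanup that drops the tables only if they exist (table names taken from the post, schema assumed to be dbo):

-- Hedged sketch: drop each table only if it exists, so the cleanup can be
-- re-run safely; let any error surface instead of being swallowed.
IF OBJECT_ID('dbo.table1') IS NOT NULL
    DROP TABLE dbo.table1

IF OBJECT_ID('dbo.table2') IS NOT NULL
    DROP TABLE dbo.table2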
 
Thanks for help!
 
mulata 

View 6 Replies



Efficient Way To Transfer Huge Amount Of Records

Sep 28, 2006

Hi All,

I used a data flow task, and when trying to transfer data from an OLE DB source (~75 lakh, i.e. 7.5 million, records) to an OLE DB destination, SSIS fails in the middle with an error saying the transaction log is full and to try again after clearing it.

My question is: what is the most efficient way to transfer, say, more than 50 lakh (5 million) records while ensuring that it doesn't fail in the middle?
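One commonly suggested direction, assuming the destination is a SQL Server table, is to commit the transfer in batches so the transaction log can be reused between batches; in the SSIS OLE DB Destination the fast-load options "Rows per batch" and "Maximum insert commit size" serve the same purpose. A hedged T-SQL sketch with hypothetical table, key and column names:

-- Hedged sketch: copy rows in committed batches so log space can be reused.
-- DestTable, SourceTable, KeyCol, Col1 and Col2 are placeholders.
DECLARE @BatchSize INT, @Rows INT
SET @BatchSize = 100000
SET @Rows = 1

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.DestTable (KeyCol, Col1, Col2)
    SELECT TOP (@BatchSize) s.KeyCol, s.Col1, s.Col2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DestTable d WHERE d.KeyCol = s.KeyCol)

    SET @Rows = @@ROWCOUNT
END

If the destination database is in the full recovery model, log backups between batches (or switching to the simple or bulk-logged model for the load) are still needed to keep the log from growing.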

Thanks in Advance,

Mithun.

View 5 Replies View Related

How To Work With Huge Amount Of Records In A Table Using MSSQL Server 2000?

Dec 21, 2005

In one of our forthcoming projects, with ASP.Net/C#/MSSQL Server, we have to deal with a business table having about 15 million records. We want to know which methodologies we should adopt, regarding both the front end and the back end, so the site gives optimised performance. Also, in place of a dedicated server, the hosting company provides MSDE (which comes with .NET). Will this create any problem for this project, which has such a huge table? Should we go for some advanced database technique, such as clustering, splitting tables, etc.?

Following are the fields that the business table contains:

ID, Category ID (which comes from a Category table; each business is under a category), BusinessName, SignupDate, Address1, Address2, Phone Number, Hours Of Operation, Years in Business, LicenseNumber, DiscountCoupon, Website

View 3 Replies View Related

SQL 2012 :: Log File Data Transfer Amount (MB) Versus Data File Transfer Amount (MB)

Mar 19, 2014

In the full recovery model, if I run a transaction that inserts 10 MB of data into a table, then 10 MB of data is written to the data file. Does this mean the log file will grow by exactly 10 MB as well?

I understand that all transactions are logged to the log file to enable rollback and point in time recovery, but what is actually physically stored in the log file for this transactions record? Is it the text of the command from the transaction or the actual physical data from that transaction?

I ask because say if I have two drives, one with 5MB/s write speed for the log file and one with 10MB/s write speed for the data file, if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5MB/s by the log file drive, or is SQL server not going to try and log all 10 MB each second to the log file?
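Roughly speaking, the log records carry the row data and metadata needed to redo or undo the change, not the text of the command, so a fully logged 10 MB insert writes on the order of 10 MB (plus overhead) to the log as well. One way to check rather than guess is to compare the per-file write statistics before and after a test insert; a hedged sketch for SQL Server 2012, run in the database being tested:

-- Hedged sketch: bytes written to each file of the current database since the
-- instance started; snapshot before and after the 10 MB insert and compare
-- the deltas for the data file vs. the log file.
SELECT f.name, f.type_desc, vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.database_files AS f ON f.file_id = vfs.file_id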

View 6 Replies View Related

How To Optimize Data Import With Huge Volumes And Joins Across Data Sources Not All SQL Server Based?

Jun 7, 2006

I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source may not exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same as in SQL Server.

Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.

I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.

The result of this query would be exactly what I need to import.
How can I do this in SSIS? I couldn't figure out how to join tables in different data sources yet.

In fact I cannot write a stored procedure to do that, since one of the sources is not a SQL Server data source.
I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do.
Another possibility is to use the merge join, but due to the sorting I believe its performance would be terrible!
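If the external rows can first be landed in a SQL Server staging table (for example with a plain SSIS data flow), the compare-and-apply step can then be done set-based on the SQL Server side. A hedged sketch with hypothetical names: Staging and Destin are the staging and destination tables, Key1/Key2 stand in for the composed key, and Val for the payload columns:

-- Hedged sketch: apply changed rows (case 2), then add new rows (case 1).
UPDATE d
SET    d.Val = s.Val
FROM   dbo.Destin d
JOIN   dbo.Staging s ON s.Key1 = d.Key1 AND s.Key2 = d.Key2
WHERE  d.Val <> s.Val

INSERT INTO dbo.Destin (Key1, Key2, Val)
SELECT s.Key1, s.Key2, s.Val
FROM   dbo.Staging s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Destin d
                   WHERE d.Key1 = s.Key1 AND d.Key2 = s.Key2)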

Thanks in advance for your suggestions!

View 9 Replies View Related

Data Access :: Importing Huge Data From One Database To Another Daily

Jul 7, 2015

We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows of data from one DB to another? I checked bulk insert, but that transfers data from a file to a DB, not DB to DB.

View 6 Replies View Related

Huge Deletes In A Huge Table

Apr 3, 2000

SQL 7 SP1 NT4 SP5

I have a TRANSACTION table with 150 million rows.

I have a USER table.

Each user has about 600 records in the TRANSACTION table.

The TRANSACTION clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.

The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.

I want to go through the USER table and delete all users who have not visited me in a while.

I want to do this without substantially hindering performance in a production environment. I can perform this over a week period or two if needed.

The best way I thought of doing this was to grab x amount of users in a cursor and loop through deleting their corresponding TRANSACTION records.

Does anyone have any ideas on a better way? What is going to happen to my indexes during this time?
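A hedged sketch of that idea without a cursor: delete the inactive users' TRANSACTION rows in fixed-size batches so each batch commits quickly and blocking stays short. The LastVisit column and the 12-month cutoff are assumptions standing in for "has not visited me in a while":

-- Hedged sketch for SQL Server 7/2000: SET ROWCOUNT caps each DELETE batch.
SET ROWCOUNT 10000

DECLARE @Rows INT
SET @Rows = 1

WHILE @Rows > 0
BEGIN
    DELETE t
    FROM [TRANSACTION] t
    JOIN [USER] u ON u.USERID = t.USERID
    WHERE u.LastVisit < DATEADD(month, -12, GETDATE())

    SET @Rows = @@ROWCOUNT
END

SET ROWCOUNT 0

Since the clustered index leads on USERID, each batch's deletes stay reasonably localized; spacing the batches out (or running them off-hours) keeps the impact on the daily insert load low.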

Thanks !!!

View 3 Replies View Related

Huge Data Insertion

Jun 12, 2006

Hi,

I have 4 tables, each consisting of approximately 10,000,000 rows. They have the same columns (fTime [datetime] and bid [money]). What I want to do is collect all of the data into one of the tables, in ascending order by fTime.

PS: I want to do it as fast as possible as well.
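A hedged sketch, assuming the four tables are named t1 to t4 and t1 is the one that should end up holding everything: a clustered index on fTime keeps the combined data physically ordered by fTime, and UNION ALL avoids the cost of duplicate elimination:

-- Hedged sketch: t1..t4 are placeholder names; fTime and bid are the columns
-- from the post. The clustered index, not an ORDER BY, provides the ordering.
CREATE CLUSTERED INDEX IX_t1_fTime ON dbo.t1 (fTime)

INSERT INTO dbo.t1 (fTime, bid)
SELECT fTime, bid FROM dbo.t2
UNION ALL
SELECT fTime, bid FROM dbo.t3
UNION ALL
SELECT fTime, bid FROM dbo.t4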

View 1 Replies View Related

AMOUNT OF DATA REPLICATION

Feb 10, 2007

Hi, I'm PCV.
I want to know how to calculate the amount of data (in MB) that is transferred from one server to another
(Publisher ---> Subscriber) using merge replication. I know that the amount of data depends on the number of rows and the size of the columns. I only want to know how to calculate that amount of data. I am using SQL Server 2000 on Windows XP Professional. Thank you.

PCV

View 2 Replies View Related

How To Separate Period Amount From YTD Amount

Mar 18, 2008

I'm creating a temporary table in a Sql 2005 stored procedure that contains the transaction amount entered in a period <= the period the user enters.
I can return that amount in my result set. But I also need to separate out by account the amounts just in the period = the period the user enters. There can be many entries or no entries in any period. I populate the temporary table this way:

SELECT
t.gl7accountsid,
a.accountnumber,
a.description,
a.category,
t.POSTDATE,
t.poststatus,
t.TRANSACTIONTYPE,
t.AMOUNT,
case
when t.transactiontype=2 then amount * (-1)
else amount
end as transamount,
t.ENCUMBRANCESTATUS,
t.gl7fiscalperiodsid

FROM
UrsinusCollege.dbo.gl7accounts a

join
ursinuscollege.dbo.gl7transactions t on
a.gl7accountsid=t.gl7accountsid

where
(t.gl7fiscalperiodsid >= 97
And
t.gl7fiscalperiodsid<=@FiscalPeriod_identifier)
And poststatus in (2,3)
and left(a.accountnumber,5) between '2-110' and '2-999'
And right(a.accountnumber,4) > 7149
And not(right(a.accountnumber,4)) in ('7171','7897')

order by a.accountnumber

Later I create a temporary table that contains budget information. I join these 2 temporary tables to produce my result set. But I don't know how to get the information for just one period. For example, if the user enters 99 as the FiscalPeriod_identifier, I need a separate field that contains only those amounts(if any) that were entered for each account in Period 99.
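One hedged way to get both figures in a single pass, assuming the rows selected above are held in a temporary table called #trans (the name is a placeholder): group by account and put a CASE inside the SUM, so one column carries the amounts up to the entered period and another carries only the entered period itself:

-- Hedged sketch: column names taken from the SELECT above; #trans is assumed.
SELECT
    accountnumber,
    SUM(transamount) AS YTDAmount,
    SUM(CASE WHEN gl7fiscalperiodsid = @FiscalPeriod_identifier
             THEN transamount ELSE 0 END) AS PeriodAmount
FROM #trans
GROUP BY accountnumber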

Can anyone help? It may be that I am not seeing the forest for the trees, but I can't figure it out.

Thanks very much.

Sue

View 6 Replies View Related

Loading Huge Volume Data

May 31, 2007

Hi Good morning to all,



My day started with loading a huge volume of data, and my data flow task failed to do so.

My data flow has a flat file connected to an OLE DB target. This is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.

I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.

The system process DTSDebug.exe is using more than 2 GB of memory. My average CPU usage is around 70%, with one of the CPUs hitting 100% utilization.

I'm very new to SSIS, so please provide me some info on how to set my buffers. Is there any PDF on performance and tuning in SSIS?

Is there any bulk load transformation in SSIS to load into DB2 UDB?

If so, how do I get it installed?





Thanks in advance,

Suresh N

View 2 Replies View Related

Tranferring Huge Data To Various Tables

Mar 13, 2007

Actually, in my transformation I am transferring a huge amount of data.

I have been using the OLE DB Command at the end to dump my incoming data into the respective tables.

For example:

Suppose you have two tables, table 1 and table 2.

In my incoming data I have a lookup and check two unique columns against the unique columns in table 1. If the record does not exist, I insert a record into table 2, get the unique field of that record, and store it in a particular column of table 1.

The data is very large; is this the better way? If you have any suggestions, do let me know.



View 5 Replies View Related

Performance Issue With Huge Data

Mar 20, 2007

Hello All,

I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then, the package execution takes a lot of time.

The data in the tables is on the order of 66,000,000 rows, and we were required to kill the package execution after it took more than 24 hours. The CPU usage was more than 13,000 s and disk I/O was well above 330,000,000. I am new to the details of SSIS. Can anyone please tell me why the package has become so resource hungry?



Thanks in advance,

Atul

View 3 Replies View Related

Cannot Export To Excel With Huge Data

Aug 19, 2007

Hi All!

I have an Issue.

I am calling an RDL file through the URL and I am passing Format=Excel in the URL.

Eg. http://harinarayana/ReportServer/......&Format=Excel&...........

If the data is more than about 20,000 records, it is not able to export, and

an error like "The Service is not available" is displayed.

Does anyone have any solution for scenarios like this? It would be of great help to me.

Regards

Hari

View 1 Replies View Related

How To Query Data By Various Amount Of Filter Value ????

Jul 2, 2007

Generally, on any screen, we design the filter section by allowing the user to specify a range or a single value to search by.
But sometimes, on some screens, it is more convenient for the user to be able to specify any number of values to search by.
 
For example:
On a screen which shows information about people in any province,
the user would like to search by selecting any number of provinces.
There are checkboxes for each province in the filter section, which let users choose them.
 
Hence, sometimes the user chooses (by clicking checkboxes) 3 provinces, LA, Michigan, Washington DC, to see descriptions for only those 3 chosen provinces,
and sometimes the user chooses (by clicking checkboxes) 2 provinces, LA, Michigan, to see descriptions for only those 2 chosen provinces.
 
Please give me any idea for creating a stored procedure, or any other technique, to accomplish this....
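A hedged sketch of one common technique: pass the chosen provinces as a single comma-separated parameter and match each row's province against that delimited list. The procedure, table and column names here are hypothetical:

-- Hedged sketch: works on SQL Server 2000/2005; no dynamic SQL needed.
CREATE PROCEDURE dbo.GetPeopleByProvinces
    @Provinces VARCHAR(1000)      -- e.g. 'LA,Michigan,WashingtonDC'
AS
BEGIN
    SELECT p.*
    FROM dbo.People p
    WHERE ',' + @Provinces + ',' LIKE '%,' + p.Province + ',%'
END

The front end only has to join the checked values with commas, e.g. EXEC dbo.GetPeopleByProvinces 'LA,Michigan'.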
 
 
Help me Pleaseeeee

 
 
 

View 1 Replies View Related

Inserting Large Amount Of Data

Jan 12, 2006

hello

I have just created a test database and now need to insert a large number of records into one of the tables. We were thinking of about 1 million records. Has anyone got a SQL script that I could use to create these records?
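A hedged sketch of one way to do it (table and column names are placeholders; adjust to the real test table): seed one row and repeatedly insert the table into itself, which grows the row count geometrically instead of looping a million times:

-- Hedged sketch: reaches 1,000,000 rows in roughly 20 doubling passes.
CREATE TABLE dbo.TestData (Id INT IDENTITY(1,1) PRIMARY KEY, Payload VARCHAR(50))

INSERT INTO dbo.TestData (Payload) VALUES ('seed row')

WHILE (SELECT COUNT(*) FROM dbo.TestData) < 1000000
BEGIN
    INSERT INTO dbo.TestData (Payload)
    SELECT TOP 500000 Payload FROM dbo.TestData
END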

cheers
john

View 6 Replies View Related

Make A Job That Updates Data By Row Amount

Aug 6, 2007

I need to make a job that will update up to 8000 rows, changing the list description from 'berkhold' to 'berknew', in SQL 2000. This is something that I have to do for several projects manually every day with the following 2 steps.

SELECT ListDescription, CRRecordID
FROM dbo_BerkleyGroupInventory
WHERE ListDescription = 'BerkHold' AND CRCallDateTime < '1/1/2003' AND CRCallResultCode = 'CC'
ORDER BY CRRecordID

I then scroll to the 8000th row, copy the CRRecordID, and run the following query:

UPDATE dbo.berkleygroupinventory
SET listdescription = 'berknew'
WHERE ListDescription = 'BerkHold' AND CRRecordID <= 5968432 AND CRCallDateTime < '1/1/2003' AND CRCallResultCode = 'CC'

I'm sure there's an easier way to do this, but I'm very new to SQL and haven't figured it out yet
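A hedged sketch of collapsing the two manual steps into one statement on SQL 2000: SET ROWCOUNT caps the UPDATE at 8000 rows directly. Note that which 8000 rows get picked is not guaranteed to follow CRRecordID order, which may not matter if the job simply runs daily until no matching rows remain:

-- Hedged sketch: no need to look up the 8000th CRRecordID by hand.
SET ROWCOUNT 8000

UPDATE dbo.berkleygroupinventory
SET ListDescription = 'berknew'
WHERE ListDescription = 'BerkHold'
  AND CRCallDateTime < '1/1/2003'
  AND CRCallResultCode = 'CC'

SET ROWCOUNT 0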

View 11 Replies View Related

SQL Import Of LArge Amount Of Data

Oct 7, 2005

This is a general question on the best way to import a large amount of data to an MS SQL DB. I can have the data in just about any format I need; I just don't know how to import it. I have some experience with SQL, but not much. There are about 1500 to 2000 lines of data. I am looking for the best way to get this amount of data in on a monthly basis. Any help is greatly appreciated!! Mike Charney
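If the data can be produced as a delimited text file, BULK INSERT is one straightforward option for a monthly load of this size; a hedged sketch in which the file path, table name and delimiters are assumptions to adjust:

-- Hedged sketch: load a comma-delimited file into an existing table.
BULK INSERT dbo.MonthlyData
FROM 'C:\imports\monthly.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)  -- FIRSTROW = 2 skips a header row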

View 5 Replies View Related

Urgent! Please Help! Deleting Data From A Huge Table

Mar 16, 2004

I have a huge table with a 4-column primary key on it. I need to delete data from this table (approx. 5.6 million records to be deleted). It takes an extremely long time to delete them with a normal query.
Can someone please suggest me a better way?
Any help will be appreciated.

View 14 Replies View Related

Select Data From Huge Fact Tables

Oct 12, 2006

Hi,

I have a situation where I have 4 tables:

1. Two dimension tables (parents): DIM1 with 50,000 rows and DIM2 with 1,000 rows

2. Fact 1 with 50 columns, 25 Million rows and with FK to DIM1 and DIM2

3. Fact 2 with 40 columns, and 25 Million rows and with FK to DIM1 & DIM2 tables.

Actually fact 1 and fact 2 hold the same related data, but since our Analysis cube person wanted each fact table to have no more than 50 columns, we divided the table into 2; they have the same compound key.

That said, I have a situation where I have to select all the columns in both fact tables and do a group by. I wrote the query and ran "Analyze Query in the Database Engine Tuning Advisor" for it. It gave a bunch of recommendations about statistics and indexes, which I created. When I executed the query, the result came up in a matter of seconds, which was good.

In the query I had a condition having MarketName='Bridgeview' and DateID = 344 (FK of today-1).

When I wanted the data for last 30 days I changed to DateID in ( > FK of today -32 and < FK of today), the query responded and worked fine.

But when I changed the query to get MarketName='Aurora' (different from the one I used when running the Tuning Advisor), the result returned is an empty set. When I remove the MarketName condition, it is supposed to return all markets' data, but it returns only Bridgeview data.

I know the data is in the table for all markets, since reports are rendered from these fact tables for all of these markets(also ran queries to check the fact table data).

I am unable to point out the reason why the query behaves like this. It responds to the date change, but not to the MarketName change.

I really appreciate if anyone can help me point out the problem.

Thanks,

Venkat

View 3 Replies View Related

Huge Volume Of Data Loading Issue

Aug 21, 2007

Hi all,

I've faced a problem with the below error when I load 1.5 million rows into an Oracle database.


The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers.

Please help. Thanks

View 19 Replies View Related

ETL Delta Pulling Huge Data.. Right Approach ?

Dec 3, 2006

Hi all,

While building an ETL tool, we are in a situation where a table has to be loaded on an incremental basis. On the first run all the records, approx 100 lakh (10 million), have to be loaded. From the next run on, only the records that were updated since the last run of the package, or newly added, are to be pulled from the source database. One idea we had was to have two OLE DB Source components: in one, get the records that were updated or newly added (since we have update columns in the DB, getting them is fairly simple); in the other OLE DB source, load all the records from the destination, pass them into a Merge Join, then have a Conditional Split down the pipeline and handle the updates and inserts.

Now the question is, how slow is this going to be? Could it be that the source DB returns records pretty fast and the Merge Join struggles while waiting for all the records from the destination?

What might be the ideal way to go about my scenario? Please advise...

Thanks in advance.

View 13 Replies View Related

Deleting Large Amount Of Data From The Table......

May 18, 2001

I need to delete data from a particular table which has more than half a million records. More than 200,000 records need to be deleted from the table. What is the best way to delete the data from the table, other than importing it into a temporary table and performing the same operation there?

Let me know if the strategy to be followed is okay.

1. Drop all the triggers
2. Drop all the indexes
3. Write a procedure with a loop setting ROWCOUNT to 1000 and delete the records. ( since if I try to delete all the rows it will give timeout error )
The above procedure will delete 1000 records for each batch inside the loop till it wipes out all the data for the specified condition.
4. Recreate Indexes and Triggers.

Please let me know if there is any other optimal solution.

Thanx,
Zombie

View 2 Replies View Related

How Do You Improve SQL Performance Over Large Amount Of Data?

Jul 23, 2005

Hi, I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now, I have encountered 2 problems:

1) Sometimes, when I try to query against this table, I get a SQL command timeout. Hence, I did more testing with Query Analyzer and found that the same queries do not always take about the same time to execute. Could anyone please tell me what affects the speed of a query, and which is the most important factor among them all? (I can think of the open connections, the server's CPU/memory...)

2) I am not sure if 2 million rows is considered a lot or not; however, it is starting to take 5-10 seconds for me to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance.

Thank you,
Charlie Chang
[Charlies224@hotmail.com]

View 5 Replies View Related

How To Return Large Amount Of Data In The XML Format

Jul 23, 2005

I have SQL 2000 and need to retrieve a fairly large amount of data (~50,000 characters) in XML format and then insert it into a field of the text type. As 'FOR XML' can't be used with local variables, INSERT INTO, or SELECT INTO, this makes "XML support" quite useless in many respects. Can anyone please help me in solving this? Thanks a lot for your help and time. Pavel

View 1 Replies View Related

Implementing Multiple Databases Due To Huge Data Size?

Aug 30, 2005

Has anyone implemented splitting an application's data between two databases because the data size is extremely large? If so, could you please point me to relevant information? In this split-data scenario, a table automatically carries over to another database whenever the size limit for the current database is reached. The challenge here is for the DAL (data access layer) to automatically look into the appropriate database when the next row of data is in another database. Or perhaps there is another solution to this terabyte-size data problem. Any help on this would be greatly appreciated.

View 8 Replies View Related

Puting Huge Chunk Of Data Into Database? Workable?

Jun 14, 2006

Hi.
I am trying to put a huge chunk of text into my database, for example information about a particular product, which has more than 2000 characters. I saw the datatype "nvarchar(MAX)" in SQL Server 2005 and was wondering if I can use it to store my text.
Thanks
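For what it's worth, nvarchar(max) is meant for exactly this: it can hold up to about 2 GB of data (roughly a billion characters), so a few thousand characters per product is no problem. A minimal sketch with a hypothetical table:

-- Hedged sketch: Product and Description are placeholder names.
CREATE TABLE dbo.Product (
    ProductId INT IDENTITY(1,1) PRIMARY KEY,
    Description NVARCHAR(MAX) NOT NULL
)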

View 1 Replies View Related

Move Current Huge Data File To New GPT Drive?

Oct 31, 2015

I have a database data file at almost 2 TB, maxing out a Windows drive. Only 16 GB is left. Should I just add another data file on another Windows drive for growth? Or move the current huge data file to a new GPT drive? Or do both: add another data file and move the existing one to its own new GPT drive?

Primary objective is to make do for now.
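A hedged sketch of the "add another data file" option, where the logical name, path and sizes are assumptions: once the new file exists, new allocations are spread across both files, so the existing 2 TB file does not have to be moved right away.

-- Hedged sketch: add a second data file on the new drive (PRIMARY filegroup).
ALTER DATABASE MyBigDb
ADD FILE (
    NAME = 'MyBigDb_Data2',
    FILENAME = 'E:\SQLData\MyBigDb_Data2.ndf',
    SIZE = 100GB,
    FILEGROWTH = 10GB
)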

View 1 Replies View Related

DTS/Async Stored Procedure/Import Huge Data

Jul 20, 2005

I have a table which contains approx 3,00,000 (300,000) records. I need to import this data into another table by executing a stored procedure. This stored procedure accepts the values from the table as params. My current solution is reading the table with a cursor and executing the stored procedure for each row. This takes too long, approx 5-6 hrs. I need to make it better. Can anyone help? Samir
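If the stored procedure's per-row work can be expressed as plain SQL, replacing the cursor with one set-based INSERT ... SELECT usually turns hours into minutes. A hedged sketch with hypothetical table and column names:

-- Hedged sketch: apply the procedure's per-row logic inside the SELECT.
INSERT INTO dbo.TargetTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.SourceTable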

View 2 Replies View Related

Reporting Services And Huge Data Extracts Causes IIS To Use A Lot Of Memory

Dec 30, 2007

Hi.

I am working on a serial tracking application using SQL Server 2005 and .NET. One of the requirements is an ad-hoc file export utility in which users can drag and drop fields from a set of tables and export the results to CSV. It all sounds OK, and SQL Server Reporting Services' Report Builder seems to be just the right tool for it, but there is one problem:
The report size is big, about 7K - 8K pages and 4 - 5 columns wide; while rendering the report, IIS memory usage shoots up to about 2GB and remains at about 2GB.
Any idea if something can be done to mitigate this problem? Note that I don't need the HTML rendering at all. All I need is the CSV at the end of the day, while users are able to choose columns in an ad-hoc manner.

View 7 Replies View Related

Console Application For Retrieving A Large Amount Of Data

Sep 30, 2005

I need to retrieve a large amount of data from the SQL Server database, make changes to one field, and put the data back, using a console application. How do I do it?

View 3 Replies View Related

Reducing Time On Retrieving Large Amount Of Data

May 5, 2003

Hi,
My application needs to retrieve data from a table which has more than 15 lakh (1.5 million) records. The records keep increasing by thousands every 15 days.
Is there any way I can reduce the retrieval time? Basically I have a select statement with a few conditions and a clause for the IDs of these records.

View 2 Replies View Related

Problem Running Package With 'larger' Amount Of Data

Jun 9, 2006

Dear,

I created a package getting data from files and database sources, doing some transformations, retrieving dimension IDs, and then inserting the result into a fact table.

Running this package with a limited amount of data (a few hundred thousand records) does not result in any errors, and everything goes fine.

Now, running the same package (still in debug mode) with more data (about 2,000,000 rows) doesn't produce any errors either, but it just stops running. In fact, it doesn't really stop, but it doesn't continue either. If I had only been waiting for some minutes or hours, I could think it was still processing, but I waited for about a day and it is still 'processing' the same step.

Any ideas on how to dig further into this in order to find the problem? Or is this a known problem?



Thanks for your ideas,

Jievie

View 4 Replies View Related






