Handling High Volume Data

Apr 18, 2007

Hi Guys,

I am facing problems with concurrent access in SQL Server 2000. The scenario is that the DB contains one huge de-normalized table containing 40 million records.

The application frequently queries this table to populate other derived tables, and the SQL queries take a long time to return results.

So while one query is executing, other users' queries go into a wait state. Please suggest how I can improve this.

Or do I need to upgrade to SQL Server 2005?
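A hedged illustration of the usual SQL 2000 stop-gaps (the table and column names below are hypothetical; NOLOCK reads uncommitted data, so it only suits derived tables that can tolerate slight inaccuracy):

-- Option 1: a covering index so the populating query reads far less of the 40M-row table.
CREATE INDEX IX_BigTable_Covering ON dbo.BigTable (CustomerID, OrderDate);

-- Option 2: read without taking shared locks, so writers are not blocked (dirty reads possible).
SELECT CustomerID, OrderDate
FROM dbo.BigTable WITH (NOLOCK)
WHERE OrderDate >= '2007-01-01';

SQL Server 2005 also adds SNAPSHOT isolation, which removes reader/writer blocking without dirty reads, so the upgrade is a legitimate option for this workload.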

Regards,
Prashant




Solution Design For High Volume Of Data

Apr 18, 2006

Hi,

I have been asked to design a solution for a client of mine who basically requires the daily analysis and reconciliation of the differences between 2 extremely large text files.

The files are not in an identical format but are both in some form of delimited format (one is CSV, the other is a little more complex). For the sake of this question, let's assume that I can effectively import each file into an MS SQL table.

Each file will have in excess of 100,000 rows each day (new data for each day).

Whilst I know that MS SQL easily has the capacity to store the data, is there a recommended way to tackle the potential problems? (I imagine that performance is important... they will be running the report every day.)

Or is building the solution as simple as importing the data into 2 tables, and then querying the differences and outputting as a report using Crystal?
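For the reconciliation itself, a full outer join on the business key is the standard shape once both files are staged. A hedged sketch, with hypothetical staging table and column names:

-- Rows that differ or exist on only one side; index both staging tables on TradeRef first.
SELECT COALESCE(a.TradeRef, b.TradeRef) AS TradeRef,
       a.Amount AS AmountFileA,
       b.Amount AS AmountFileB
FROM dbo.StagingFileA AS a
FULL OUTER JOIN dbo.StagingFileB AS b
    ON a.TradeRef = b.TradeRef
WHERE a.TradeRef IS NULL          -- only in file B
   OR b.TradeRef IS NULL          -- only in file A
   OR a.Amount <> b.Amount;       -- in both files, but different

At 100,000-plus rows per day this stays comfortably fast as long as the join key is indexed.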

Any suggestions appreciated.

Thanks

Rael


High Volume I/O!!!

Feb 28, 2008

I have a summary table with a 9-field composite primary key. Every 10 minutes, my system generates 2 files of 500,000 to 750,000 rows to be summarized into this table. I first bulk insert those into a temp table, then trigger an inner-join UPDATE query to do the updates, followed by a left-outer-join INSERT to do the inserts. As the day goes on and millions of rows accumulate in my summary table, this process becomes too slow. Any ideas about causes/solutions?
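Two hedged suggestions for that pattern (the column names below are placeholders): index the temp table on the same composite key before the joins, and pre-aggregate the batch so each summary row is touched only once.

-- Without an index, every UPDATE/INSERT join rescans 500K-750K unindexed staged rows.
CREATE CLUSTERED INDEX IX_Stage_Key ON #stage (k1, k2, k3);   -- extend to all 9 key fields

-- Collapse duplicate keys within the batch first, so the summary table is hit once per key.
SELECT k1, k2, k3, SUM(amount) AS amount
INTO #stage_agg
FROM #stage
GROUP BY k1, k2, k3;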

RLiss


High Volume DB Performance Problems

Mar 19, 2008

Hello, I am a master's student and I am preparing a seminar about high-volume DB performance problems. For example: if I have a table with 1,000,000 records, and that count is growing exponentially over time, what problems may I face with insertion, deletion, and search in such a table? And what are the problems in processing such a DB in general?


MSDE For Large Volume And High No. Of Users?

Sep 21, 2005

Hi,

We need to use a free database for a project because of a tight budget. Is MSDE OK for handling a large volume of data and 70-80 users? My understanding is that MSDE is optimized for 5 concurrent users. Is MySQL better than MSDE?

Thanks,
Ben


High Volume Database Query Optimization

Dec 10, 2007

Hello everybody,

I am doing research about high-volume database treatment (maybe a database with terabytes of data), so is there any optimization or specialization for queries dealing with such a database?
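At that scale the usual levers are indexing, batching, and horizontal partitioning. As one hedged example, a sketch of SQL Server 2005 table partitioning by date (all names are hypothetical):

-- Partition function/scheme splitting rows by year; queries that filter on EventDate
-- then touch only the relevant partitions (partition elimination).
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Events (
    EventID   bigint       NOT NULL,
    EventDate datetime     NOT NULL,
    Payload   varchar(400) NULL,
    CONSTRAINT PK_Events PRIMARY KEY (EventDate, EventID)  -- must include the partitioning column
) ON psByYear (EventDate);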


Performance Tuning On High Volume Server

Jan 4, 2007

I am looking to improve the performance of my sql server databases.

I currently have a dual-location system. The database server setup is basically a quad-Xeon with 4 GB of RAM at my office and a dual-Xeon with 4 GB at a remote web-hosting location. There are separate application/web/intranet servers at each site. The two database servers are replicated, with the local server publishing to the remote server.

The relational database holds circa 26 million records, growing by a volume of 10,000 per day; there are approximately 50,000 queries performed per day.

My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200 ms between servers), the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and then replicate from there to the remote server, placing the burden on the second server?

I am planning to upgrade the local server to a high-capacity 4+ CPU 64-bit server. My problem is that although I have noticed a slowdown in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
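For quantifying where the time goes on SQL 2000, one hedged starting point is snapshotting the server's wait statistics around a representative period (DBCC SQLPERF(WAITSTATS) is only semi-documented, but it is the 2000-era equivalent of the later wait-stats DMVs):

-- Reset the counters, let an hour of normal load run, then read them back
-- to see which resources queries actually spend their time waiting on.
DBCC SQLPERF(WAITSTATS, CLEAR);
-- ... representative workload runs here ...
DBCC SQLPERF(WAITSTATS);

PerfMon counters (disk queue length, page life expectancy, replication agent counters) captured over the same window give the hardware side of the picture.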


SQL Server 2008 :: High Volume Reads And Writes?

Jul 6, 2015

We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues with the current environment when massive updates are happening. Reads should have higher priority over writes.
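Given the "reads over writes" priority, one hedged option on SQL 2008 is row versioning, so readers see the last committed row version instead of blocking behind the big updates (the database name below is a placeholder):

-- Readers stop taking shared locks and read row versions from tempdb instead.
-- Flipping this requires no other active connections in the database, and it
-- adds tempdb version-store load, so test before applying to production.
ALTER DATABASE BigDb SET READ_COMMITTED_SNAPSHOT ON;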


Loading Huge Volume Data

May 31, 2007

Hi, good morning to all,

My day started with loading a huge volume of data, and my data flow task failed to do so.

My data flow has a flat file source connected to an OLE DB destination; this is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.

I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.

The system process DTSDebug.exe is using more than 2 GB of memory. My average CPU usage is 70%, with one of those CPUs hitting 100% utilization.

I'm very new to SSIS, so please give me some info: how do I set my buffers, and is there any PDF on performance and tuning in SSIS?



Is there any bulk-load transformation in SSIS to load into DB2 UDB?

If so, how do I get it installed?

Thanks in advance,

Suresh N


Transaction/Data Volume Vs SQL Edition

Nov 15, 2007

I am in the process of choosing between SQL Workgroup and Standard Edition. I see the differences in features on the comparison table, but I do not see any references to differing capabilities in handling transactions.

Are there any differences between Workgroup and Standard in terms of transaction/data handling capabilities? I.e., does Standard have the superior capability of handling X times more TPMs than Workgroup?

If not, am I correct to assume that this is totally determined by hardware configuration (number of CPUs, processor speed, HD speed, RAM)?

If the transaction/data handling capacity is solely determined by hardware configuration, and I know the number of transactions and the amount of reads/writes per second, where would be a good reference for working out what kind of hardware configuration I need? (Ideally, once I know the hardware configuration, I would be able to determine whether I need Workgroup or Standard.)

Thanks in advance,
benbry


Huge Volume Of Data Loading Issue

Aug 21, 2007

Hi all,

I've run into the below error when I load 1.5 million rows into an Oracle database.


The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers.

Please help. Thanks


T-SQL (SS2K8) :: Measuring Volume Of Data Created Temporarily To Replace Usage Of Physical Tables In Query

Sep 12, 2014

How can I measure the volume of data created temporarily to replace the usage of physical tables in an SQL query?
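On SQL 2008 the tempdb space-usage DMVs report exactly this per session; a hedged sketch:

-- Pages allocated in tempdb by the current session; multiply by 8 for KB.
-- internal objects = worktables/sort spools, user objects = #temp tables and table variables.
SELECT session_id,
       internal_objects_alloc_page_count * 8 AS internal_kb,
       user_objects_alloc_page_count * 8     AS user_kb
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;

Run the query of interest in the same session and compare the counters before and after to get the volume it generated.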


High Availability To High Protection Without Reconfiguring Mirroring

Apr 23, 2007

Hi,

Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure database mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not to High Protection, despite both High Availability and High Protection being synchronous.

If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
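For reference, High Availability and High Protection are the same synchronous mode (SAFETY FULL); the only difference is the witness. So a hedged sketch, assuming a database named MirroredDb, is simply:

-- Removing the witness moves a SAFETY FULL session from High Availability
-- to High Protection without rebuilding the mirroring session.
ALTER DATABASE MirroredDb SET WITNESS OFF;

-- And High Performance, if ever needed, is the asynchronous setting:
ALTER DATABASE MirroredDb SET PARTNER SAFETY OFF;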

Thanks,
J.


High Safety Changed To High Performance After Failover?

Mar 6, 2008

Hi There

I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.

If you are running High Safety with automatic failover, when failover occurs, does the session automatically change to High Performance mode? Since, for failover to occur, something must have happened to the primary, it will be impossible to commit transactions on the new primary and the mirror synchronously, since one of them is no longer available.

So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?

Thanx


Transform Large Volume Of Data. Should It Be Processed Chunk By Chunk?

Dec 16, 2007



Hi,
I have to transform about 60 million rows of data, and it runs so slowly that it never finishes in my testing. Should I process it chunk by chunk? Or are there other techniques I can use (I am using a data flow task)? Thanks for any advice.
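Chunking usually does help, mainly by keeping each transaction (and the log) small. A hedged sketch of the equivalent batched pattern in T-SQL, assuming a source table dbo.Source with an ever-increasing key Id (all names and the transformation are placeholders):

DECLARE @BatchSize int, @LastId int, @MaxId int;
SELECT @BatchSize = 100000, @LastId = 0, @MaxId = MAX(Id) FROM dbo.Source;

WHILE @LastId < @MaxId
BEGIN
    INSERT INTO dbo.Target (Id, Amount)
    SELECT Id, Amount * 1.1               -- stand-in for the real transformation
    FROM dbo.Source
    WHERE Id > @LastId AND Id <= @LastId + @BatchSize;

    SET @LastId = @LastId + @BatchSize;   -- each loop commits one small batch
END

In a data flow, the analogous knobs are a smaller commit size on the destination and a WHERE-ranged source query executed per chunk.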



Handling Bad Data?

Jan 2, 2008

I have a peculiar requirement. I need the unmatched data to be passed to a bad-data table as well as tied to a dimension member "Others". Please advise how to satisfy these two conditions simultaneously.


Handling Bad Data

Jul 25, 2007

I have an SSIS package that takes in a flat file and pushes the data into a table. However, once in a while the file will have some bad data in it (for example, this particular time I have too many delimiters on one line). I want the package to redirect the row to an error table and keep going with the processing for the rest of the data. To do this, I have hooked up an error output from the flat file source to an OLE DB destination and assigned all the rows to Redirect Row on Error in the Error Output section of the flat file source. Unfortunately, it does not work! The flat file receives an error and stops. Is this because something different has to happen when it is a problem with the whole row? Any help would be appreciated, thanks.


Slow Running Query With A High Lob Logical Reads, Lob Data

Jun 1, 2007

We had some slowdown complaints lately, and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting STATISTICS PROFILE on shows that every time it executes, it does an index seek.

A Profiler session showed a huge number of reads for these queries, 1,500 through 50,000, depending on the value being looked up. I set STATISTICS IO on, and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.

Three of the columns in the SELECT statement are text columns. When I take them out of the query, the LOB logical reads drop to 0, then climb incrementally as I add each column back in.

Is there any way to improve the performance without changing the data types to varchar(max)?


select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
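One hedged option that avoids a data type change: for text/ntext columns, SQL Server can store small values in the data row itself, which turns LOB logical reads into ordinary row reads for values that fit. The threshold below is an example value:

-- Store text values up to 7000 bytes in-row ('text in row' accepts ON or 24-7000).
-- Existing values migrate as rows are next updated, so the benefit appears gradually.
EXEC sp_tableoption 'profiles', 'text in row', '7000';

Trimming the 170-odd columns down to the ones each caller actually needs would reduce the LOB reads as well.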


SQL 2012 :: TEMPDB High Avg Write Wait Time On Data Files

Apr 15, 2014

I am currently investigating a high average write time issue (145 ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb, in that:

1. Data files = number of physical cores
2. Data files and log files are on separate partitions, away from the other databases.
3. tempdb is pre-sized, and incremental file growths do not appear to be happening with any frequency.

We have SharePoint 2012 set up on other SQL Servers with tempdb configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why it's confusing. File I/O auditing on the partitions themselves shows that the file I/O is very fast on the partitions holding the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excessive use of tempdb, with operations taking a long time to resolve.
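To confirm whether the stall is really at the file level, a hedged per-file latency query against sys.dm_io_virtual_file_stats:

-- Average write stall per tempdb data/log file since the instance started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC;

If these numbers are low while the perf counter is high, the latency is accumulating above the file layer, which would fit the excessive-tempdb-usage theory.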


MS SQL Server Handling Unicode Data

Feb 21, 2007

Hi,

I want to fetch Unicode data from MS SQL Server 2005. I have created one user with Russian-language characters.

Following is my code snippet:
------------------------------------
wchar_t wchar1[55];   /* assumed bound earlier with SQLBindCol(hstmt, 1, SQL_C_WCHAR, ...) */
int count = 0;

/* wcstombs converts via the current C locale: call setlocale(LC_ALL, "") (or a Russian
   locale) first, or the Cyrillic characters cannot convert and wcstombs returns -1,
   which is the most likely cause of the garbled name. */
while ((rc = SQLFetch(hstmt)) != SQL_NO_DATA)
{
    size_t length = wcstombs(NULL, wchar1, 0);   /* NULL dest: just compute the length */
    if (length == (size_t)-1)
        continue;                                /* value not representable in this locale */

    char *strChar = (char *)malloc(length + 1);
    if (strChar != NULL)
    {
        count++;
        wcstombs(strChar, wchar1, length + 1);
        printf("The length is %d", (int)length);
        printf(" simple char <%s>", strChar);
        free(strChar);
    }
    wprintf(L" wide char <%ls>", wchar1);
}


-----------------------------------

Can anybody help me? I think the problem is in fetching the name.
Does anybody have working code for fetching Unicode data from SQL Server 2005?

Also, is there a query that will tell the locale setting of the server and also the client?

Thanks


Handling Multiple Data Records With Sp

May 2, 2007

I need to read a table and extract the records that match a criterion (d'oh! not that simple). Then I put the records in a text message and send them out via email. It works OK, but the problem is when I have multiple records for one entry. Since SQL doesn't have arrays "per se", I need to load the records into a "temporary array" and then assemble the message at the end to include the records. Any suggestions on how to "emulate" an array in a stored procedure (SQL)? Not sure what direction to follow. Thanks
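The usual emulation is string aggregation into a variable. A hedged sketch, assuming a table dbo.Orders(CustomerID, OrderRef) and one message per customer (all names are hypothetical):

DECLARE @msg varchar(8000), @line varchar(200);   -- 8000-character cap on SQL 2000
SET @msg = '';

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderRef FROM dbo.Orders WHERE CustomerID = 42;
OPEN c;
FETCH NEXT FROM c INTO @line;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @msg = @msg + @line + CHAR(13) + CHAR(10);   -- append each record to the body
    FETCH NEXT FROM c INTO @line;
END
CLOSE c; DEALLOCATE c;

-- @msg now holds all matching records and can be handed to xp_sendmail / sp_send_dbmail.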


Handling Encrypted Data In SSIS

Mar 11, 2008



Hi, I have source data in an encrypted format. How should I handle it in SSIS?
I have found no information about such a situation.

Does anybody have any ideas about it?

Bhakti


Handling Log File During Data Import

Oct 3, 2007



Hi,
I have a package that uses a ForEach loop component to import flat files, and uses an OLE DB destination component to insert the data into some staging tables (using table fast load with a max insert commit size of 1000 rows); the biggest individual table import would be circa 5000 rows. At the end of each file import, a stored proc is called to transfer the data into production tables, then the next file is imported.

Periodically (when importing more than 5 or 6 files) the process fails with the error message:

The transaction log for database 'blah' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

This occurs when committing the data to staging. I do not start any transactions during this process, and the staging tables are cleaned out by truncating them, so I'm not sure exactly what is causing the log file to fill up (I'm not a DBA).

I know this is not specifically an SSIS problem, but can anyone give me a suggestion about the best way to handle the log file during an SSIS import? Should I execute a DBCC SHRINKFILE before each flat file is imported? Is there some other approach I should be taking to either insert the data or to move it from staging to production?
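A hedged first step is to ask the server the question the error message suggests, then act on the answer:

-- Why can't the log space be reused? ('blah' is the database name from the error.)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'blah';

-- If the answer is LOG_BACKUP and point-in-time recovery of this staging data
-- is not required, SIMPLE recovery lets the log truncate at each checkpoint:
ALTER DATABASE blah SET RECOVERY SIMPLE;

Shrinking the file each run just forces it to regrow; sizing the log for the largest import (or committing in smaller batches) is usually the better fix.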

Thanks!







T-sql Data Type Conversion Error Handling

Jun 21, 2001

I'm trying to convert a column from varchar(14) to float; some of the rows contain non-numeric characters. I've tried to write a cursor to step through the records, run CAST(col1 AS float) on each record, and then use IF @@ERROR <> 0 to capture an error, but it doesn't work. It steps through fine, but when the CAST fails it ends execution of the script with "Error converting data type varchar to float."

How can I capture this error without having the script fail? Is there another way to get this done?
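Conversion errors of this kind abort the batch before @@ERROR can run, so on SQL 2000 (pre-TRY/CATCH) the usual workaround is to test each value instead of catching the failure. A hedged sketch, with hypothetical table/column names:

-- ISNUMERIC alone is too permissive (it accepts '$', 'd', 'e', etc.), so the extra
-- LIKE filter rejects strings containing anything besides digits, sign, and decimal point.
SELECT col1,
       CASE WHEN ISNUMERIC(col1) = 1 AND col1 NOT LIKE '%[^0-9.+-]%'
            THEN CAST(col1 AS float)
       END AS col1_as_float
FROM dbo.myTable;

Rows that fail the test come back NULL rather than killing the script, and the same predicate works in a set-based UPDATE with no cursor at all.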

Thanks,

Jim


Flexible Back-end Data Handling In .net + Sql Project

Nov 23, 2005

Hi guys,

Got a problem now :( please help... We have a project handling records saved in a table in a SQL 2000 server (it will be upgraded to 2005 soon); every month around a million records will be inserted.

Now a user raised a request: once criteria are matched, the project should do some back-end handling. For example, if

SELECT colID, fieldC, fieldD FROM dataTable WHERE fieldA = 'fieldA_valueB'

returns some recordset, then for each @fieldC, @fieldD we shall do some back-end trick, maybe

UPDATE dataTable SET fieldE = 'fieldE_valueF' WHERE fieldC = @fieldC AND fieldD = @fieldD

Let's say such a rule is named rule01. Hope I'm expressing the problem clearly?

My question is, shall we do this on the database side, using triggers, or by informing our .NET project to do it?

1. Since the records are coming in at around 1 million per month, how can we handle the performance issue?
2. For now the rules are still somewhat simple; it seems we could at least do it with EXEC or sp_executesql. But rules may turn quite complex (for example, audit logging, or more complex issues); shall we do it in a .NET program? But how can a trigger in the SQL database inform a .NET program, or a web service, or a Windows service? By executing an executable console program?
3. Users may raise more and more rules; how could we provide a flexible solution? I mean, we're trying to build it less hard-coded. EXEC or sp_executesql in the SQL database still seem somewhat flexible, while in .NET doing something like eval() in JavaScript or EXEC in SQL Server is a little bit troublesome. But putting all this business logic in stored procedures sounds a little bit weird.

Guys, hope I have made myself clear. Any suggestion? Thanks very much.

Yours,
athos.
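One common decoupling pattern here (a sketch, not the only answer): have a trigger record matching rows in a queue table, and let a .NET Windows service poll that table and apply whatever rule logic it likes, however complex. The trigger below assumes the dataTable columns from the post; the queue table and all other names are hypothetical:

-- Queue table the .NET service polls; the trigger only records what matched.
CREATE TABLE dbo.RuleQueue (
    QueueID   int IDENTITY(1,1) PRIMARY KEY,
    RuleName  varchar(50) NOT NULL,
    fieldC    varchar(50) NOT NULL,
    fieldD    varchar(50) NOT NULL,
    QueuedAt  datetime    NOT NULL DEFAULT GETDATE(),
    Processed bit         NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_dataTable_rule01 ON dbo.dataTable
AFTER INSERT, UPDATE
AS
    -- Set-based: one INSERT covers all affected rows, keeping the trigger cheap
    -- even at a million inserts per month.
    INSERT INTO dbo.RuleQueue (RuleName, fieldC, fieldD)
    SELECT 'rule01', i.fieldC, i.fieldD
    FROM inserted AS i
    WHERE i.fieldA = 'fieldA_valueB';

This keeps the triggers small and dumb (good for performance), while new rules become new subscriptions in the .NET service rather than more hard-coded SQL.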


Handling Large Data Values In Textboxes

Mar 28, 2007

Hi, I've got a report using a List item that vertically displays the columns from a table. The problem I run into is that some of the fields in this table contain large blocks of text where the users have entered comments and such.

I am using Textboxes to display this data.

So my report will look something like
-----
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3


<white space>



<page break>

Field value 3 ---> this is a big block of text
Field label 4 Field value 4
etc
------
It appears as though the report attempts to keep the contents of each textbox together, even if that means breaking onto an entirely new page to do so. I would prefer the data to flow more naturally, with the page breaking in the middle of the data being displayed should it be too large to fit on the page it started on.

-----
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3 Field value 3 --- As much as can fit on this page

<page break>

Field value 3 ---> remaining data that broke over the page
Field label 4 Field value 4
etc
------

Any suggestions would be appreciated.


Flat Data Length Handling In IS Package

Apr 11, 2008



hello All,

I have a flat file which has a column with a length of 24. Whenever that column exceeds 24 characters, my package fails. I would like to know if there is any way I can keep my package running in this situation and send the bad row to an error table?

Any help will be appreciated.

Thanks


Errors In The High-level Relational Engine. A Connection Could Not Be Made To The Data Source With The DataSourceID Of '

Feb 7, 2006

When I deploy the cube, which is sitting on my PC (local), the following 4 errors come up:

Error 1 The datasource , 'AdventureWorksDW', contains an ImpersonationMode that that is not supported for processing operations. 0 0
Error 2 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'AdventureWorksDW'. 0 0
Error 3 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Customer', Name of 'Customer' was being processed. 0 0
Error 4 Errors in the OLAP storage engine: An error occurred while the 'Customer Alternate Key' attribute of the 'Customer' dimension from the 'Analysis Services Tutorial' database was being processed. 0 0


Data Flow Inside For Each Loop - Error Handling

Jun 23, 2006

Hopefully this is an easy question:

Inside of a ForEach loop (looping through an ADO recordset of objects to import) I have a data flow task (along with many other processes)... If the data flow task succeeds, I log success in a table. If it errors, I want it to fail the data flow task (which will fire off my event handler for that data flow and log the failure, email, etc.), BUT I want it to continue the loop. I can't seem to figure out how to get the data flow object not to fail the whole loop. If any other objects inside the ForEach, other than the data flow, fail, I would like the whole loop to fail. Also, if possible (but not a requirement), I would like it to have a threshold where, if the data flow fails X (variable) times, it will fail the package.

I am having difficulty with how to not fail the loop when the data import fails... just looking for simple "on error next"-type logic for that specific object in the ForEach, but not the rest. Thanks in advance for the help/advice.


Error Handling In OLEDB Source In Data Flow

Sep 11, 2007

I am trying to execute an SP like the one below in an OLE DB source in a data flow. This statement includes the INSERT statement (a row-by-row transaction). I would like to create error-handling logic so that if the transaction fails to insert a row, that particular row is ignored and processing moves on to the next row without stopping the whole process. How can I do this?


exec usp_Inert_Registration_Episodes_Assessments
    @Unique_ID = ?,
    @Gender_Cd = ?,
    @Birth_Date = ?,
    @Race_Ind = ?,
    @Ethnicity_Cd = ?,
    @Registration_Dt = ?
    --@Object_Key
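Since the rows are inserted inside the stored procedure, the skipping has to happen there too. On SQL 2005 and later, a hedged sketch of wrapping the per-row insert in TRY/CATCH so a failed row is logged and skipped rather than aborting the batch (the target and error-log tables are hypothetical):

BEGIN TRY
    INSERT INTO dbo.Registration_Episodes_Assessments (Unique_ID, Gender_Cd, Birth_Date)
    VALUES (@Unique_ID, @Gender_Cd, @Birth_Date);
END TRY
BEGIN CATCH
    -- Swallow the failure for this row, but keep a record of it for review.
    INSERT INTO dbo.Registration_Load_Errors (Unique_ID, Error_Msg, Logged_At)
    VALUES (@Unique_ID, ERROR_MESSAGE(), GETDATE());
END CATCH;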


T-SQL (SS2K8) :: Error Handling - String Or Binary Data Would Be Truncated

May 16, 2014

We have a wrapper procedure that I am looking at replacing shortly. The basic layout is something like:

Create proc SP_Wrapper
as
exec SP_LogProc 'proc1', 'Start'
exec proc1
exec SP_LogProc 'proc1', 'End'

exec SP_LogProc 'proc2', 'Start'
exec proc2
exec SP_LogProc 'proc2', 'End'

exec SP_LogProc 'proc3', 'Start'
exec proc3
exec SP_LogProc 'proc3', 'End'
...

You get the idea. So logproc just logs that the procedure has started or stopped to a table so we can monitor it easily.

The procedure is then run by an agent job every morning. This morning we had a little bit of an odd one. In the example above, we effectively got as far as proc3 being started (as it was logged). However, there was then an error of "String or binary data would be truncated" (severity 16, I believe). Yet when proc3 was manually run afterwards, it worked (the data was unchanged). Going back, proc2 works fine too, and it's actually proc1 that has the error.

I have looked through the procs in question (and the wrapper) and can't find any error handling that is relevant (there is one TRY/CATCH block, completely separate, at the end of the wrapper procedure for a small routine).
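A hedged sketch of how the wrapper could be restructured so the failing procedure identifies itself unambiguously; the proc names follow the post, and the log table is hypothetical:

CREATE PROC SP_Wrapper_v2
AS
BEGIN TRY
    EXEC SP_LogProc 'proc1', 'Start';
    EXEC proc1;
    EXEC SP_LogProc 'proc1', 'End';

    EXEC SP_LogProc 'proc2', 'Start';
    EXEC proc2;
    EXEC SP_LogProc 'proc2', 'End';
    -- ... and so on for the remaining procs
END TRY
BEGIN CATCH
    -- ERROR_PROCEDURE() names the proc that actually raised the error, so a
    -- truncation inside proc1 can no longer masquerade as a proc3 failure.
    INSERT INTO dbo.ProcErrorLog (ProcName, ErrorMsg, LoggedAt)
    VALUES (ERROR_PROCEDURE(), ERROR_MESSAGE(), GETDATE());
END CATCH;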


SSIS - Handling Recursive XML Elements In Data Flow Task

May 22, 2008

Hi all,
I have a requirement here to import data from an XML file into a SQL database. The XML schema consists of various elements, and one of them is recursive: a Parameter node contains a Parameter node within it, and the nesting can go n levels deep. I have given a sample of the schema below:


<xs:element minOccurs="0" name="Parameter">
  <xs:complexType>
    <xs:sequence>
      <xs:element minOccurs="0" name="ID" type="xs:string" />
      <xs:element minOccurs="0" name="Description" type="xs:string" />
      <xs:element minOccurs="0" name="Type" type="xs:string" />
      <xs:element minOccurs="0" name="Parameter">
        <xs:complexType>
          <xs:sequence>
            <xs:element minOccurs="0" name="ID" type="xs:string" />
            <xs:element minOccurs="0" name="Description" type="xs:string" />
            <xs:element minOccurs="0" name="Type" type="xs:string" />
            <xs:element minOccurs="0" name="Parameter">
              <xs:complexType>
                <xs:sequence>
                  <xs:element minOccurs="0" name="ID" type="xs:string" />
                  <xs:element minOccurs="0" name="Description" type="xs:string" />
                  <xs:element minOccurs="0" name="Type" type="xs:string" />
                </xs:sequence>
              </xs:complexType>
            </xs:element>
            <xs:element minOccurs="0" name="Parameter">
              ...
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:sequence>
  </xs:complexType>
</xs:element>

All of the Parameter nodes contain data that has to be imported into a single table, dbo.Parameters. I cannot use Union All since I don't know how many levels of nesting I will have in the file. Is there any way to do this operation in a Data Flow Task using the XML Source? Can anyone help me with this?
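The SSIS XML Source flattens each nesting level into a separate output, which is why a fixed set of UNION ALLs breaks down when the depth is unknown. One hedged alternative is to shred the file in T-SQL, where the XPath //Parameter matches Parameter nodes at any depth (SQL 2005+; the file path and column types are assumptions):

DECLARE @x xml;
SELECT @x = CONVERT(xml, BulkColumn)
FROM OPENROWSET(BULK 'C:\import\parameters.xml', SINGLE_BLOB) AS f;

-- //Parameter selects every Parameter node regardless of how deeply it is nested.
INSERT INTO dbo.Parameters (ID, Description, Type)
SELECT p.value('(ID/text())[1]',          'varchar(50)'),
       p.value('(Description/text())[1]', 'varchar(255)'),
       p.value('(Type/text())[1]',        'varchar(50)')
FROM @x.nodes('//Parameter') AS t(p);

An Execute SQL Task can run this in place of the data flow, and no package change is needed when the nesting gets deeper.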

Thanks,
Dhileep


Integration Services :: Handling Empty Text File Load Into Table Through SSIS Data Flow?

Jun 16, 2015

We have created an SSIS package to load a text file into a table. The source system shares 10 text files, and recently they stopped generating data for one of the text files (it comes in empty); after a few months they will start generating data for that file's batch processing again.

The issue here is that the Data Flow task fails while loading the empty text file into the table. How can we handle this empty-file load issue in the SSIS package?







