SQL Server Integration Services - Stop Large Number From Being Converted To Exponential

Sep 23, 2006

I am importing a text file with a column (serial numbers) containing alphanumeric data, some mixed and some purely numeric. The very large values that are all numeric are being converted to exponential notation when I run the file through an import package in SQL Server Integration Services (2005).

Ex. 4110041233214321 --> 4110040000000000 (displays as 4.11E+15)

In the past I dealt with this by importing the text file into Excel and changing the format of the column to Number. This works even when many of the values contain alpha characters. I am not sure how to accomplish the same thing without going through Excel. If you have any ideas on this I would be happy to hear from you.

I am importing the text file into a SQL table.
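
One way to sidestep the conversion (a sketch, with hypothetical table and column names) is to keep the serial numbers as text end to end: declare the column as a string (DT_STR/DT_WSTR) in the Flat File Connection Manager instead of letting SSIS guess a numeric type, and land it in a varchar column:

-- hypothetical staging table: the value is never treated as a number,
-- so 4110041233214321 arrives intact alongside the alphanumeric serials
CREATE TABLE dbo.SerialStaging (
    SerialNumber varchar(50) NOT NULL
);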

View 1 Replies


Integration Services :: How To Stop At A Blank Row In Excel

Oct 9, 2015

I am importing a .xlsx into a SQL Server 2014 table. I would like to import each row with an ID and stop at the notes that are listed at the bottom. Please see attached image. How do I tell my SSIS data flow task to stop importing at the first blank row? I never know how many rows of data I may have, so I cannot use a fixed row number to stop at.
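
One workaround (a sketch, assuming the notes rows carry nothing in the ID column) is to let the data flow import everything and discard the ID-less rows afterwards, either with a Conditional Split on ISNULL(ID) inside the flow or with a cleanup statement against a staging table:

-- hypothetical staging table: the blank separator row and the notes
-- below it carry no ID, so they are the only rows removed
DELETE FROM dbo.ExcelStaging
WHERE ID IS NULL;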

View 4 Replies View Related

Integration Services :: Tell A Package To Stop Executing On Failure

Sep 8, 2015

I am using SQL Server 2012 SP1. I have built an SSIS package that imports flat file data from various files to SQL Server. I have got it to do everything I want when things are going well, and am now working on what I want it to do when it encounters a failure executing specific tasks and containers. For example, I have a Foreach Loop container that executes a dedicated stored procedure for each csv file in the target folder. If any of the stored procedures fail to run for any reason I want to carry out certain actions.

For the most part I think I will be fine using the Event Handlers.  What I can't seem to find is how to tell the package to stop executing on a Failure event after carrying out the actions defined by the relevant Event Handler. Or, perhaps it isn't necessary as that would be the default behaviour on a failure?

View 2 Replies View Related

Integration Services :: Oracle Sequence Number Within SSIS

Jul 22, 2015

For each row coming out of my data source, I would like to add the result of an Oracle query to it (select sequence.nextval from dual).

I need to acquire the sequence number before all my processes in my data flow, so I don't want to have a trigger in Oracle call nextval and do it automatically for me.

I also think getting the value of nextval into a variable at the beginning of the process would not work, because it only increments the value once.
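
One pattern that avoids both the trigger and the increments-only-once problem (a hedged sketch in Oracle-side SQL; the sequence name is made up) is to pull a whole block of sequence values from Oracle in a single query, sized to the batch being loaded, and pair them with the incoming rows:

-- Oracle-side query: 500 NEXTVAL calls in one round trip; the subquery
-- form is used because NEXTVAL is not allowed directly with CONNECT BY
SELECT my_seq.NEXTVAL AS seq_val
FROM (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 500);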

View 2 Replies View Related

Integration Services :: Auto Number Generator In SSIS

Aug 11, 2015

In SSIS I created a flow, and within that flow I need to create an automatic number generator. The flow is simple: an OLE DB source, a Multicast, a Flat File destination and an OLE DB destination. In the OLE DB source there is no field that can serve as the starting point for this generator. This automatic number should start with 10000 and be increased by 1 for each row.

First question: is it possible to create a column in the SSIS flow which is the starting point for the generator (in this case 10000), or should it be created in the source table?

Second question: where in SSIS, and with what component, can I create such a generator?
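
If the numbering only has to exist inside the flow, one option (a sketch; the key column and table names are hypothetical) is to generate it in the OLE DB source query itself, so no dedicated SSIS component is needed:

-- each source row gets a running number; the first row becomes 10000
SELECT ROW_NUMBER() OVER (ORDER BY SomeKeyColumn) + 9999 AS AutoNumber,
       s.*
FROM dbo.SourceTable AS s;

The usual pure-SSIS alternative is a Script Component that increments a counter variable per row.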

View 5 Replies View Related

Integration Services :: Leading Zeros With Employee Number Column

Jul 20, 2015

I need leading zeros on the EmployeeNum column (the source EmployeeNum has datatype float).

REPLICATE('0',8-LEN(RTRIM(a.[RecordID and EmpNum]))) + RTRIM(a.[RecordID and EmpNum]) AS [Employee Num]

With the above expression the value is populated correctly in the database table.

Ex:00010198

Whenever we execute the package, the leading zeros do not make it into the CSV file; the source itself is truncating them.

Source: OLE DB, Destination: Flat File (CSV)
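
Since the source column is a float, one fix (a hedged sketch; the source table name is invented) is to do both the cast and the padding in the OLE DB source query, and define the corresponding flat file column as a string (DT_STR) so nothing downstream reinterprets it as numeric:

-- cast the float to a whole number first, then left-pad to 8 digits
SELECT RIGHT('00000000' + CAST(CAST(EmployeeNum AS bigint) AS varchar(8)), 8)
       AS [Employee Num]
FROM dbo.EmployeeSource;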

View 2 Replies View Related

Integration Services :: Export To Excel Dynamic Number Of Columns

Sep 11, 2015

We have a requirement to produce ad hoc Excel reports with a standardized header page with a disclaimer attached. We want to be able to feed in a SQL statement, or a table with the result set from a SQL statement, and have SSIS populate an existing blank Excel workbook with the disclaimer attached. The use of xp_cmdshell is not an option.

I've spent a lot of time looking for solutions on the web and it seems it is not possible, although many articles are 3-5 years old. Before I throw in the towel, I just wanted to get feedback from this group on whether it is still not possible in the latest versions of SQL Server and SSIS, or to ask if there are any 3rd party solutions that can do this today.

View 5 Replies View Related

Integration Services :: Creating A Column With Total Number Of Records In Dataset For Each Row?

Aug 17, 2015

I have a transformation whose final result set gives me 25 rows of data. Now, before I put it into the destination table, I need to add another column which shows how many total records we have. Like:

My dataset:

A  20 abc
B 24 mnp
c 44 apq

Now I need to add another column within my transformation before I store the result set to the destination, like this:

A 20 abc 3
B 24 mnp 3
c 44 apq 3

Here, the new column gives the count of total rows in our dataset, which was 3.

How can I achieve this? Can I use a Derived Column for this?
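
A Derived Column alone cannot do it, because it sees one row at a time; the count either needs a second pass (a Row Count transformation into a variable, then a Derived Column) or it can be computed in the source query. A sketch of the latter, with column names invented to match the example rows:

-- COUNT(*) OVER () attaches the total row count to every row
SELECT Code, Qty, Label,
       COUNT(*) OVER () AS TotalRows
FROM dbo.MyDataset;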

View 6 Replies View Related

Integration Services :: Iterating Over Object In SSIS With Fix Number Of Results Each Time

Aug 11, 2015

I've got this issue with a query in SSIS. From a table in SQL Server I'm getting over 25000 different identifiers. These identifiers are associated with many values in a table in one Oracle database. This is the schema that I have implemented for doing this.

The problem is that some days there can be over 45000 identifiers, and at that point running a loop for every one is not the best solution (it can take too much time to get the result).

Previously I performed another query where, from the SQL statement, I created and sent a single row with all the values concatenated, then recovered this string from an object variable and used it to build the query in the ODBC Source that queries the table in Oracle, something like this:

'Select * from Oracle_table' + @string_values

with @string_values = 'where value in (........)'. It works well because the number of values is small enough to be used, like 250. But in this case I cannot use this approach because the number is really big and the Oracle DBA is obviously going to cancel the query.

So I wonder: how can I iterate over the object getting only a small number of values each time, something like 300 or at most 500, to avoid the cancellation of the query while at the same time doing the minimum number of loops?
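
One way to keep the loop count low (a sketch; table and column names are hypothetical) is to assign a batch number to the identifiers on the SQL Server side and rebuild the concatenated IN list once per batch. 45000 identifiers in batches of 500 means only 90 iterations, and 500 literals also stays under Oracle's 1000-element IN-list limit:

-- every identifier gets a 0-based batch number; the Foreach loop then
-- iterates over batches instead of individual identifiers
SELECT Identifier,
       (ROW_NUMBER() OVER (ORDER BY Identifier) - 1) / 500 AS BatchNo
FROM dbo.Identifiers;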

View 5 Replies View Related

Integration Services :: Load Data From Flat Files Having Variable Number Of Columns

Jun 23, 2015

I want to load flat files into a single table, but the flat files can have a variable number of columns, up to a maximum of 10. The table in my database has 10 columns, so if I load a flat file having 6 columns then the rest of the columns in the table will have nulls. I don't want to use a Script Task for this, as I am not good at writing C# code.

View 5 Replies View Related

Integration Services :: Commit N Number Of Records (SSIS Versus Stored Procedure)

May 8, 2015

I have a stored proc that is returning the results I need for output to .txt file.

Is there a way in SSIS to commit 50K (or whatever number) row batches at a time or should I just handle this in the stored proc?

SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
INTO #TempTable
FROM SomeTable;

DECLARE @Batch int = 50000, @i int = 0;
WHILE @i * @Batch < (SELECT COUNT(*) FROM #TempTable)
BEGIN
    SELECT * FROM #TempTable              -- throttle: one 50K batch per pass
    WHERE rn > @i * @Batch AND rn <= (@i + 1) * @Batch;
    SET @i += 1;
END

DROP TABLE #TempTable;

But if I'm doing this in SSIS, I can't drop the #temp table, otherwise I have nothing to output, right?

View 6 Replies View Related

Integration Services :: Perform Lookup On Large Dataset Based On A Small Dataset

Oct 1, 2015

I have a small number of rows in a dataset, Table 1. There is a CLOB on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table 1. In short, I want to emulate the following:

Table 1:  Small table without CLOB, 10 rows. 
Table 2: Large table with CLOB, 10,000,000 rows

select CLOB
from table2
where pk in (select pk from table1)

I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed, obviously, so it should be a fast lookup.

Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.

View 2 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I would, however, like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar(2500), no index
LogTimeStamp - datetime - index asc


I have thought of 3 different solutions:

Method 1:
Simply run "Delete from db_log where logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?
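
A common tip (a sketch in the SQL Server 2000 idiom, where SET ROWCOUNT still governs DELETE) is to run the delete in small chunks, so each transaction stays short and the log can be kept in check with regular log backups between chunks, instead of growing through one huge delete:

-- delete in 10,000-row chunks; stop when nothing is left to delete
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM db_log
    WHERE logtypeid <> stuff_I_want_to_keep  -- placeholder condition from Method 1
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0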


Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work a few days we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop table db_log; create table db_log;" (or truncate, but that takes a long time too)

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems this method is what will work best for my set-up, unless there are other suggestions?

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, whatever tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table, 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked and finally approved after 6 or 7 iterations. Because of these iterations the monthly data set is added, then deleted, then added again several times. Because the table is big this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
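
Given that the table must stay available, one approach (a minimal sketch, assuming SQL Server 2012's DELETE TOP and an index on Trx_Date) is to remove the rejected month in small batches, so locks are held only briefly and other users can keep querying between batches:

-- delete one month (e.g. 201508) in 50K chunks rather than all at once
DECLARE @TrxDate int = 201508;
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Sales WHERE Trx_Date = @TrxDate;
    IF @@ROWCOUNT = 0 BREAK;
END

Longer term, partitioning on Trx_Date would turn the monthly delete into a metadata-only partition switch, though that is a bigger change to make to a production table.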

View 9 Replies View Related

SQL Server 2005 Slows Down After A Large Number Of Queries

Jul 4, 2007

Hi,

We are running SQL Server 2005 Enterprise Edition with SP2 on a Windows 2003 Enterprise Server SP2 with an Intel E6600 dual-core CPU and 4GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through the rows one by one, queries another table to get some values, and updates the transaction.

I have set a limit of 2GB of RAM for SQL Server, and when I run the application it performs 5 record updates (the process described above) per second. After roughly 10,000 records, the application slows down to about 1 record per second. I have tried to examine the Activity Monitor, however I can't find anything that might indicate what's causing this.

I have read that there are some known issues with hyper-threaded CPUs, however since my CPU is dual-core I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.

The only thing that I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?

If there is no solution for this problem, does anyone know if there is a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution, however I will use it as a workaround.)

Any advice will be greatly appreciated.

Thanks,
Joe

View 3 Replies View Related

SQL SERVER 2005 DATABASE MIRRORING For Large Number Of Databases

Jun 1, 2006

I am trying to enable database mirroring for 100 databases. It goes error-free up to 59 databases (sometimes 60) with the status (principal, synchronized) on the principal. On the 60th or 61st database it gives the status (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror start to give connection timeouts, and it does not enable database mirroring on any more databases. I have SQL SERVER 2005 Enterprise with SP1 on the servers. A witness is not included yet.

These are my test servers... I have more than 500 databases on my production servers.

Principal and mirror are both using port 5022 for ENDPOINT communication.

All of the databases are critical and all must be included in the database mirroring. So after that I tried to implement database mirroring again... The system has 3 GB of RAM, and SQL SERVER (the mirror) is using 85 MB of RAM, but it still gives this error while trying to enable database mirroring for the 37th database:

"There is insufficient system memory to run this query"

WHY?

View 19 Replies View Related

SQL Server 2005 Full Text Performance With Large Number Of Records

Dec 12, 2007

Hi
We are using the SQL Server 2005 Full Text Service. The data is not huge, but each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a very long time to return results when the number of hits (rows) returned is large for some searches. With the same data and catalog, full text queries for less common words return promptly. The nature of the problem doesn't allow us to keep only the top results; we need all of them. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
 
I am considering splitting the table, and hence the catalog, and using multiple servers to do full text searches in smaller catalogs. Is there any other way this issue can be solved?
 
If splitting is the only way, can you give me an idea of the statistical/standard limit to the number of search results/catalog size at which FTS still gives good results?
 
Thanks in advance

View 1 Replies View Related

SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server with 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads. What needs to be checked to assess the feasibility of going ahead with an async mirroring setup as described above?

View 0 Replies View Related

SQL Server Admin 2014 :: MDW Data Collector - Large Number Of SPIDs

Nov 6, 2015

I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.

On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.

Is this normal behaviour? I've seen more blocking as a result of this.

Is there any way to reduce the number of SPIDs generated?

View 0 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring. We need to delete the old records, around 500k rows from a table, but it has foreign key relations. How are these kinds of deletes done on production servers?
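
A common pattern for this on mirrored production systems (a hedged sketch; the table and column names are hypothetical, since the schema isn't shown) is to delete the child rows first and then the parents, both in small batches, so the foreign key is never violated and each short transaction ships quickly to the mirror:

-- children first, in 10K chunks
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) c
    FROM dbo.ChildTable AS c
    INNER JOIN dbo.ParentTable AS p ON p.ParentId = c.ParentId
    WHERE p.IsOld = 1;
    IF @@ROWCOUNT = 0 BREAK;
END
-- then the parents
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.ParentTable WHERE IsOld = 1;
    IF @@ROWCOUNT = 0 BREAK;
END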

View 2 Replies View Related

Integration Services :: Import Varying Number Of Tables Each Time From One Database To Different Database

Sep 9, 2015

I am new to SSIS. I have been struggling with this for the past week. I have a weird task: I need to import several tables from one database to a different server with a new database name. We need to do this at the end of every year. The main problem is that the number of tables varies every year; some of last year's tables may be gone, or there may be new ones. So I need to create a dynamic task that takes care of this every year without changing the package.

I have performed the following tasks:

1. Create a new dynamic database (I have used an Execute SQL Task to do this).

2. Copy all the table structures (I have used an Execute SQL Task to do this).

3. Import the data. This is the main problem. I was trying to create a dynamic connection string with variables, as suggested in several forums, but I finally learned that this cannot be done if the table structures are different, since the metadata cannot be refreshed at runtime.

4. The final step is to create a process to validate the data (the count from each table for both source and destination). I think this can be done with an Execute SQL Task.

What is the best method to do this? My DBA does not like the "Transfer SQL Server Objects Task" or the "Transfer Database Task". I would like to create this as a dynamic process.
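
Whichever mechanism ends up moving the data, the table list itself can be made dynamic without those tasks (a sketch; the source database name is a placeholder): an Execute SQL Task can load this result into an object variable for a Foreach loop, so the package never hard-codes which tables exist in a given year:

-- enumerate whatever user tables the source database contains this year
SELECT TABLE_SCHEMA, TABLE_NAME
FROM SourceDb.INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;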

View 5 Replies View Related

Reporting Services :: Every Other PDF Converted Page Is Blank

May 8, 2015

I have created a report and converted it to PDF. Every other page in the PDF is blank. I have checked the margins but could not fix it on the report.

View 6 Replies View Related

Security Rights Needed To Start And Stop Services From SQL Server Management Studio

Jun 21, 2007

Hello everyone.

I have a question about granting enough rights to allow a non-admin user to start and stop a SQL Server service via SQL Server Management Studio by right-clicking on the server node.



I have changed the ACLs on the SQL Server service security and gave the user rights to start and stop the service. This does allow them to connect remotely to the server using Computer Management, and they can successfully start and stop the service. But in SQL Server Management Studio the option still does not show up unless he is an admin of the server.



Does anyone know what other security settings I need to address for the start and stop options to show up when I right-click on the server node?

Thanks for any help

View 3 Replies View Related

Reporting Services :: Footer For A Report That Is Converted To Word Gets Cut Off

Nov 24, 2015

In Visual Studio, when previewing a report with a footer that has two rows of text, the footer appears to display correctly. When I export to Word, the footer for the first page looks as if it has shifted down slightly and the second line is missing. Subsequent pages show the footer correctly.

View 2 Replies View Related

Running A Large Number Of SSIS Packages (with Dtexec Utility) In Parallel From A SQL Server Agent Job Produces Errors

Jan 11, 2008

Hi,

I have stumbled on a problem with running a large number of SSIS packages in parallel, using the "dtexec" command from inside a SQL Server job.

I've described the environment, the goal and the problem below. Sorry if it's a bit too long, but I tried to be as clear as possible.

The environment:
Windows Server 2003 Enterprise x64 Edition, SQL Server 2005 32bit Enterprise Edition SP2.

The goal:
We have a large number of text files that we're loading into a staging area of a data warehouse (based on SQL Server 2k5, as said above).

We have one "main" SSIS package that takes a list of files to load from an XML file, loops through that list and for each file in the list starts an SSIS package by using the "dtexec" command. The command is started asynchronously by using the system.diagnostics.process.start() method. This means that a large number of SSIS packages are started in parallel. These packages perform the actual loading (with BULK INSERT).

I have successfully run the loading process from the command prompt (using the "dtexec" command to start the main package) a number of times.

In order to move the loading to a production environment and schedule it, we have set up a SQL Server Agent job. We've created a proxy user with the necessary rights (the same user that runs the job from the command prompt) and created the SQL Agent job (there is one step of type "cmdexec" that runs the "main" SSIS package with the "dtexec" command).

If the input XML file for the main package contains a small number of files (for example 10), the SQL Server Agent job works fine: the SSIS packages are started in parallel and they finish work successfully.

The problem:
When the number of concurrently started SSIS packages gets too big, the packages start to fail. When a large number of SSIS package executions are already taking place, the new dtexec commands fail after 0 seconds of work with an empty error message.

Please bear in mind that the same loading still works perfectly from the command prompt on the same server with the same user. It only fails when run from the SQL Agent job.

I've tried to understand the limit at which the packages start to fail, and I believe that the threshold is 80 parallel executions (I understand that it might not be desirable to start so many SSIS packages at once, but I'd like to do it despite this).

Additional information:

The dtexec utility provides an error message in which the package variables are shown, along with the fact that the package ran 0 seconds, but the "Message" is empty (Message: ).
Turning the logging on in all the packages does not provide an error message either, just a lot of run-time information.
The try-catch block around the process.start() script in the main package's script task also does not reveal any errors.
I've increased the "max worker threads" number for the cmdexec subsystem in the msdb.dbo.syssubsystems table to a safely high number and restarted the SQL Server, but this had no effect either.

The request:

Can anyone give ideas what could be the cause of the problem?
If you have any ideas about how to further debug the problem, they are also very welcome.
Thanks in advance!

Eero Ringmäe

View 2 Replies View Related

SQL Server 2012 :: Odd Cast Of Exponential To Float?

Mar 23, 2015

For whatever reason I'm unable to cast anything beyond e-4 to a float, which makes no sense. Am I missing something?

select cast( '1.550e-6' as float)
returns 1.55E-06
select cast( '1.550e-5' as float)
returns 1.55E-05
select cast( '1.550e-4' as float)
returns 0.000155
select cast( '1.550e10' as float)
returns 15500000000

View 9 Replies View Related

Reporting Services :: Range Bar Chart - X Axis Dates Label Format Need To Be Converted To Quarter

Sep 4, 2014

I have created a range bar chart and am not able to achieve the following tasks.

1. Change X-axis Label Format to Quarter:

I have an x-axis with dates and a y-axis of project groups. I have changed the x-axis interval type to Months and the interval to 3.
   
Set the Maximum =  Max(ProjectEndDate) and Minimum = Min(ProjectStartDate).

Now my chart shows x-axis interval dates at 3-month intervals in mm/dd/yyyy format. I want to change this interval date format to quarters. The problem is that the LabelsFormat property does not recognize "=Q or q or quarter" and also does not accept expressions. How can I achieve this?

2. Placing series side by side when they do not overlap

I want to place series from the same group side by side only when the previous project's end date is earlier than the next project's start date, and otherwise place the next project on a new row. How can I achieve this?

View 2 Replies View Related

SQL Server 2012 :: Float Value Converting To Exponential While Inserting To Varchar Field?

Aug 6, 2015

I am converting a varchar field to float, summing it using GROUP BY, and then inserting the result into a varchar field in another table.

While inserting, the float value is converted to exponential notation, e.g. 1.04177e+006, but if I execute only the SELECT statement the actual float value is displayed, e.g. 1041765.726.

My question is: why is it converted during the insert, and how do I avoid it?

select query : SUM(CONVERT(float,(rtrim(REPLACE(REPLACE( column1, CHAR(13), ' '), CHAR(10), ' '))))) as AggregateValue
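One way to avoid the notation change (a hedged sketch built on the query above; the FROM and GROUP BY parts are not shown in the post, so they are invented) is to format the float explicitly before it reaches the varchar column. An implicit float-to-varchar conversion falls back to scientific notation for values this large, whereas STR() always produces fixed-point text:

-- STR(value, total_length, decimals) renders the float as fixed-point text,
-- e.g. 1041765.726 instead of 1.04177e+006
SELECT LTRIM(STR(SUM(CONVERT(float, RTRIM(REPLACE(REPLACE(column1, CHAR(13), ' '), CHAR(10), ' ')))), 25, 3)) AS AggregateValue
FROM dbo.SourceTable
GROUP BY grouping_column;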

View 4 Replies View Related

Number Of SQL Queries Too Large?

Jul 28, 2004

Hi All,

I'm currently in the middle of building quite a large CMS using ASP.NET and MSSQL2K, and have begun to question whether the number of queries I am using to build one page is too many.

For one page (View Forum) I am getting all of the templates and checking access, then pulling a list of threads, getting the first and last posts, then user info for the first and last posts... anyway, to view 10 threads on the page the number of queries comes to about 54 and the page takes 0.064 seconds to load.

My question is: is this too many queries to be running for a single page load? All queries use stored procedures.

Thanks Guys.

View 3 Replies View Related

Large Number Of Connections

Jul 19, 2004

hi,
Does a large number of connections to a SQL Server result in sluggish queries? By a large number of connections I mean in the range of 2000 to 2500 connections to various databases on the same server.
This happens on the production database: if I download a copy of the database and run the specific queries on the local box they take hardly 1 second to execute, whereas on the production box they take about 45 to 60 seconds. I'm working on the indexes part, but was not sure whether the number of connections to a server affects the performance of the queries, or whether it is just that the indexes need defragmenting...

View 2 Replies View Related

Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type

The [PK] tag indicates the composite primary key.

Our scenario is the following:

* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation - this comes to roughly something like 25 * 20,000 * 50 * 250 or around 5 billion rows.

We will be inserting around 50 * 20,000 or around 1 million rows each day (the inserts will take place in the middle of the night, outside the main query time) - this could be done through something like BCP or BULK INSERT.

Our real problem is we have not previously worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site we have found a partitioning method that is available:
http://www.microsoft.com/resources/...art5/c1861.mspx

Having experimented with the above system it seems rather quirky, and looking at the available literature it seems that this is not more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemId and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
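
Given the stated access pattern (ItemId and AnalyticType supplied, then a specific date or a date range), a composite clustered index in exactly that column order keeps each query's rows physically contiguous, which is usually the main lever at this scale. A sketch of the DDL, with an invented table name:

-- every lookup becomes a single range scan over adjacent rows
CREATE CLUSTERED INDEX IX_Analytic_Item_Type_Date
ON dbo.AnalyticValue (ItemId, AnalyticType, AnalyticDate);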

View 4 Replies View Related

Large Number Of Rows.

Apr 3, 2007

Hi all,

A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?

Does the number of records returned affect the performance in spite of the indexing?

Thanks,

DBLearner

View 3 Replies View Related






