Querying 9.5 Million Records With No Primary Key

Sep 4, 2007



Hello all, quick question: I'm looking for the most efficient way to extract data daily from a table with some 9.5 million records and growing. These are transaction records, and ideally I would like to bring over the last day's transactions and add them to my existing table. I cannot use the transaction date because sometimes we have to operate in an "offline" mode where the records are brought over later; this could be days or, unfortunately, a week or more. There are some 30 fields in the transaction table, so is there a more efficient way to do this than simply creating a concatenated key? Would it be more efficient to drop and recreate the table daily? That sounds extreme, so I wanted to get a few ideas.
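One common pattern (a sketch only; the table and column names below are hypothetical, and it assumes some combination of columns uniquely identifies a transaction) is an anti-join that copies over only the rows not already present locally:

INSERT INTO dbo.LocalTransactions (StoreId, TerminalId, TxnNumber, TxnDate /* , remaining columns */)
SELECT s.StoreId, s.TerminalId, s.TxnNumber, s.TxnDate /* , remaining columns */
FROM   dbo.SourceTransactions AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.LocalTransactions AS l
                   WHERE  l.StoreId    = s.StoreId
                     AND  l.TerminalId = s.TerminalId
                     AND  l.TxnNumber  = s.TxnNumber);

An index on the same combination of columns on both tables keeps the anti-join from scanning all 9.5 million rows each day. A rowversion (timestamp) column on the source table is another option: it gives a change marker that does not depend on the transaction date or on when the offline rows finally arrive.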


Thanks in Advance

km

View 2 Replies



DTS That Does HouseHolding With Millions Of Records

Sep 3, 2006

Hi all, I was given a task to create householding logic on a table that has millions of records. First, let me explain what householding is: say I have two records with the same phone number; that means both records belong to the same household, but it can get more complicated. This article explains it: http://www.teradata.com/t/page/115924/index.html. Anyone who has worked with householding knows you need to scan the table many times to find all the households; I used a DTS package to do it. I tested the DTS package on 11 records, like the article did, and that worked great, but once I went to millions of records each loop takes 2 hours or so, and I have no idea how many loops I will need. If anyone out there has worked with household queries in SQL, your input would help me a lot. Thanks.
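A set-based sketch of the usual iterative approach (the table dbo.Customers and its Phone, Address, and HouseholdId columns are assumptions; the smallest CustomerId sharing a link value becomes the household id, and the loop runs only until no row changes):

-- seed: every customer starts as its own household
UPDATE dbo.Customers SET HouseholdId = CustomerId;

DECLARE @changed int
SET @changed = 1
WHILE @changed > 0
BEGIN
    -- collapse households that share a phone number
    UPDATE c
    SET    c.HouseholdId = m.MinId
    FROM   dbo.Customers AS c
    JOIN  (SELECT Phone, MIN(HouseholdId) AS MinId
           FROM dbo.Customers GROUP BY Phone) AS m ON m.Phone = c.Phone
    WHERE  c.HouseholdId > m.MinId
    SET @changed = @@ROWCOUNT

    -- collapse households that share an address
    UPDATE c
    SET    c.HouseholdId = m.MinId
    FROM   dbo.Customers AS c
    JOIN  (SELECT Address, MIN(HouseholdId) AS MinId
           FROM dbo.Customers GROUP BY Address) AS m ON m.Address = c.Address
    WHERE  c.HouseholdId > m.MinId
    SET @changed = @changed + @@ROWCOUNT
END

Because each pass is one set-based UPDATE per link column instead of row-by-row DTS work, the number of loops is bounded by how deeply households chain together, not by the row count.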

View 3 Replies View Related

Migration Millions Of Records

Aug 19, 2007

Hi

Please guide me on the problem given below.


While doing the migration with cursors on the sample data given below, it takes many hours to complete, so I would like to know whether there is a way to do it with a simple query.


ACNo Amount Balance CalType
A001 10 10 +
A001 10 20 -
A001 40 40 +
A001 10 30 -
A002 90 90 +
A002 20 110 +
A002 40 150 +
A003 10 30 +
A003 10 40 +
A003 10 30 -
A004 40 40 +
A004 10 30 -


I have only the Amount value; the Balance has to be calculated from it based on CalType, and the Balance has to be reset to 0 whenever the ACNo changes.


Please guide me toward a faster approach.
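A cursor-free sketch (the table name dbo.Txn and the RowId ordering column are assumptions; any column that fixes the row order within an account will do):

SELECT  t.ACNo,
        t.Amount,
        (SELECT SUM(CASE WHEN x.CalType = '+' THEN x.Amount ELSE -x.Amount END)
         FROM   dbo.Txn AS x
         WHERE  x.ACNo  = t.ACNo
           AND  x.RowId <= t.RowId) AS Balance,      -- running total, restarts per account
        t.CalType
FROM    dbo.Txn AS t
ORDER BY t.ACNo, t.RowId;

On SQL Server 2012 or later the correlated subquery can be replaced with SUM(CASE ...) OVER (PARTITION BY ACNo ORDER BY RowId ROWS UNBOUNDED PRECEDING), which scales far better on large tables.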


Regards,
Mohanraj

View 3 Replies View Related

How Can I Take Millions Of Records Without Time Out Error?

Sep 19, 2007

Hi, there are about 30 million records on my MS SQL server and I want to access 2 million of them at one time. However, when I try to access them with a SQL command I get a timeout error. I want to select the first 100 records, then the next 100, and so on. How can I do this? For example: select * from tbl_Customer where name = @name_ -> timeout error. Someone said you can solve this problem with cursors, but I can't find enough articles about it. Thanks...
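A way to page through the rows without a cursor is keyset paging (a sketch; it assumes tbl_Customer has an indexed key column, here called CustomerId, and that the name column is indexed too):

DECLARE @name_ varchar(100), @lastId int
SET @name_  = 'Smith'    -- hypothetical search value
SET @lastId = 0          -- highest CustomerId returned by the previous page

SELECT TOP (100) CustomerId, name /* , other columns */
FROM   tbl_Customer
WHERE  name = @name_
  AND  CustomerId > @lastId
ORDER  BY CustomerId;

Each call feeds the largest CustomerId of the previous page back into @lastId, so no page is re-read. The command timeout is a client-side setting (CommandTimeout on the command object), so raising it is also an option, but paging plus an index on (name, CustomerId) removes the need.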

View 3 Replies View Related

Which Is The Fastest Way To JOIN Having Millions Of Records?

Mar 15, 2004

If there are 13 million records in one table and 40 thousand records in another, what is the fastest way to join the two tables?

This was a question someone asked me that I couldn't answer properly. Could anybody give the answer, with the reasoning behind it?
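Broadly, the join syntax is the same whatever the sizes; what matters is that the join column is indexed on both sides so the optimizer can choose an efficient plan (typically a hash or merge join at these row counts). A sketch with hypothetical table and column names:

-- dbo.BigTable has 13 million rows, dbo.SmallTable has 40 thousand, joined on CustomerId
CREATE INDEX IX_BigTable_CustomerId   ON dbo.BigTable (CustomerId);
CREATE INDEX IX_SmallTable_CustomerId ON dbo.SmallTable (CustomerId);

SELECT b.CustomerId, b.Amount, s.CustomerName
FROM   dbo.BigTable   AS b
JOIN   dbo.SmallTable AS s
  ON   s.CustomerId = b.CustomerId;

Restricting the select list to the columns actually needed and filtering the large table before the join (where the logic allows) matter as much as the join type itself.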

Thanx.

View 7 Replies View Related

Query To Delete Millions Of Records

Jul 23, 2012

I have a query to delete millions of records, and I want to delete in batches of 1000. My SELECT join statement returns millions of records, so this takes a lot of time. How do I select 1000 records, delete everything that is not in those records, and then loop without selecting the same records again? Here is what I have:

DECLARE @i INT
WHILE (1=1)
BEGIN
BEGIN TRAN
DELETE TOP(1000) FROM dbo.ABC123
WHERE SUBSTRING(dumbdumb,1,8) NOT IN

[code]....
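A common shape for this kind of batched delete (a sketch only; dbo.KeepList and its Prefix column stand in for the poster's truncated NOT IN list, and the @@ROWCOUNT check is what ends the loop):

WHILE (1 = 1)
BEGIN
    BEGIN TRAN;

    DELETE TOP (1000) FROM dbo.ABC123
    WHERE SUBSTRING(dumbdumb, 1, 8) NOT IN (SELECT k.Prefix FROM dbo.KeepList AS k);

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN;
        BREAK;          -- nothing left to delete
    END

    COMMIT TRAN;
END

Because each pass actually deletes rows, the next DELETE TOP (1000) naturally picks up different rows; there is no need to remember which ones were already processed. Keeping each batch in its own short transaction also stops the log from growing around one huge delete.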

View 3 Replies View Related

25 Million Records Insert Takes More Than 3 Hrs

May 14, 2008

Hi Teachies,

I am using SQL Server Standard Edition .

Into one of my tables I am inserting around 25 million records, and that takes more than 3 hours.

The same thing happens when fetching records from that table.

This database contains only a single filegroup, i.e. PRIMARY,

and that table has a clustered index as well as nonclustered indexes.

It does not have any triggers.

How do I improve this performance?

Table partitioning cannot be used in SQL Server Standard Edition.

Would dropping all nonclustered indexes before the insert operation improve performance?

Please advise.
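Dropping or disabling the nonclustered indexes for the load and rebuilding them afterwards usually helps large inserts considerably; a sketch with hypothetical table and index names:

-- before the load
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable DISABLE;   -- repeat per nonclustered index

-- ... run the 25 million row insert here, ideally in batches ...

-- after the load: REBUILD re-enables the index and leaves it defragmented
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable REBUILD;

Loading in batches (say 100,000 rows per transaction), switching the database to BULK_LOGGED or SIMPLE recovery for the load, and keeping data and log files off the system drive are the other usual levers in Standard Edition.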

Thanks

Rajesh Varma

View 2 Replies View Related

Millions Records Archiving And Delete

Feb 27, 2007

The issue: SQL 2000. I have to keep the last 3 months of data in the database. Every day I load 2 million records, so every day I also have to export (to another database acting as a historical data container) and delete the 2 million records inserted 3 months plus one day ago. The main problem is that the delete operation takes a while and involves the transaction log. The questions are: 1) How can I improve this operation (export/delete)? 2) If we decide to migrate to SQL 2005, can we use some feature, such as partitioning, to solve the problem? In Oracle I can use the "truncate partition" statement, but from what I am reading that can't be done in SQL 2005. The idea would be to partition the last three months to split the data, but I don't think the partition function can be dynamic or contain something like "the last 3 months". Can you help us? Thank you. Mastino
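SQL Server 2005 has no TRUNCATE PARTITION, but switching the oldest partition out to an empty staging table of identical structure is a metadata-only operation and serves the same purpose (a sketch; the table names and partition number are hypothetical):

-- move the oldest partition's rows out of the main table almost instantly
ALTER TABLE dbo.Transactions
SWITCH PARTITION 1 TO dbo.Transactions_Staging;

-- archive the staging table to the history database, then empty it
TRUNCATE TABLE dbo.Transactions_Staging;

Both tables must live on the same filegroup and match column-for-column. Sliding the window forward each day means a SPLIT/MERGE on the partition function, which is normally scripted into the nightly job; the partition function itself holds fixed boundary dates, not an expression like "last 3 months".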

View 5 Replies View Related

Comparing Millions Of Records From File A And B

Sep 11, 2006

I'm trying to compare about 28 million records (record length 270) from tables A and B using the Lookup task as described in this forum. The process works fine with about two million records or so on my desktop (P4 3.39 GHz, 1.5 GB RAM), but hangs with the amount of data I'm trying to process. I tried using full and partial caching, but to no avail. I'm thinking this is a hardware resource problem. So, does anyone have any recommendations on the hardware needed for this kind of operation, or other suggestions? Thanks in advance...

View 8 Replies View Related

Inserting 25 Millions Records SQL Server Takes 3hr

May 19, 2008

Hi Teachies,



I am using SQL Server Standard Edition with a good hardware configuration.

Into one of my tables I am inserting around 25 million records, and that takes more than 3 hours.

The same thing happens when fetching records from that table.

This database contains only a single filegroup, i.e. PRIMARY,

and that table has a clustered index as well as nonclustered indexes.

It does not have any triggers.

How do I improve this performance?

Table partitioning cannot be used in SQL Server Standard Edition.

Would dropping all nonclustered indexes before the insert operation improve performance?

Please find the details below.

SERVER CONFIGURATION:

Intel Pentium(R) 4 CPU, 2.88 GHz / 2.79 GHz, 2 GB RAM

Operating System: WINDOWS 2003 R2 STANDARD SERVICE PACK 2



Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86)

Oct 14 2005 00:33:37

Copyright (c) 1988-2005 Microsoft Corporation

Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)





DATABASE DETAILS:


MDF and LDF located on C: Drive

Available Space on C: DRIVE 2.94 GB




TABLES DETAILS

CREATE TABLE [dbo].[TIX_PAYMENT_SCHEDULE](
    [PaymentScheduleId] [bigint] IDENTITY(1,1) NOT NULL,
    [OwedAmountId] [int] NULL,                          --NonClusteredIndex
    [ProposalId] [int] NOT NULL,                        --NonClusteredIndex
    [BrandId] [int] NULL,                               --NonClusteredIndex
    [DueDate] [datetime] NULL,                          --NonClusteredIndex
    [OverdueDate] [datetime] NULL,                      --NonClusteredIndex
    [ExpectedAmount] [decimal](18, 2) NULL,
    [TransactionStatusId] [tinyint] NULL,               --NonClusteredIndex
    [IsLate] [char](1) NULL,
    [IsPaymentReceived] [char](1) NULL,
    [ScheduleBatchJournalId] [bigint] NULL,             --NonClusteredIndex
    [IsValidSchedule] [char](1) NULL,
    [RuleId] [int] NULL,
    [ActionId] [int] NOT NULL,
    [ReasonId] [tinyint] NULL,
    [Comments] [nvarchar](2000) NULL,
    [NoofDays] [int] NULL,
    [ActualAmountReceived] [decimal](18, 2) NULL,
    [CreatedBy] [uniqueidentifier] NULL,
    [CreatedDateTime] [datetime] NOT NULL,
    [LastUpdatedBy] [uniqueidentifier] NULL,
    [LastUpdatedDateTime] [datetime] NOT NULL,
    [CaseScheduleId] [bigint] NULL,                     --NonClusteredIndex
    [ActionDate] [datetime] NULL,
    [HasExactMatch] [char](1) NULL,
    [IsCatchupBalanced] [char](1) NULL,
    [HasModified] [char](1) NULL,                       --NonClusteredIndex
    [PendDate] [datetime] NULL,
    [IsAutoAccept] [char](1) NULL,
    [CatchupBalanceIdentifier] [uniqueidentifier] NULL, --NonClusteredIndex
    CONSTRAINT [PK_TIX_PAYMENT_SCHEDULE] PRIMARY KEY CLUSTERED
    (
        [PaymentScheduleId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]



TABLE 2



CREATE TABLE [dbo].[TIX_PAYMENT_CASE_SCHEDULE](
    [CaseScheduleId] [bigint] IDENTITY(1,1) NOT NULL,
    [ProposalId] [int] NOT NULL,                        --NonClusteredIndex
    [DueDate] [datetime] NOT NULL,
    [OverDueDate] [datetime] NOT NULL,
    [TotalExpectedAmount] [decimal](18, 2) NOT NULL,
    [TotalActualPaymentReceived] [decimal](18, 2) NOT NULL,
    [TransactionStatusId] [int] NOT NULL,               --NonClusteredIndex
    [ActionId] [int] NULL,
    [CreatedBy] [uniqueidentifier] NULL,
    [CreatedDateTime] [datetime] NULL,
    [LastUpdatedBy] [uniqueidentifier] NULL,
    [LastUpdatedDateTime] [datetime] NULL,
    [IsValidSchedule] [char](1) NULL,
    [ScheduleBatchJournalId] [bigint] NULL,
    [IsCatchupBalanced] [char](1) NULL,
    [HasModified] [char](1) NULL,                       --NonClusteredIndex
    [CatchupBalanceIdentifier] [uniqueidentifier] NULL,
    CONSTRAINT [PK_TIX_PAYMENT_CASE_SCHEDULE] PRIMARY KEY CLUSTERED
    (
        [CaseScheduleId] ASC
    ) WITH (PAD_INDEX = ON, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO



STORED PROCEDURE:





CREATE PROC [dbo].[TIX_PRC_GENERATE_PAYMENTSCHEDULE_DATA]
(
@XMLParams XML,
@ToDate datetime,
@HasModified char(1)

)
AS
BEGIN
SET NOCOUNT ON
--Exception Handling Variable Declaration
DECLARE @ErrorMessage NVARCHAR(200),
@ErrorNumber INT,
@ErrorSeverity INT,
@ErrorState INT,
@ErrorProcedure NVARCHAR(50),
@ErrorLine INT,
@ErrorDesc NVARCHAR(100)

DECLARE @XMLPayment INT
BEGIN TRY
IF @XMLParams IS NOT NULL
BEGIN --BEGIN IF


SET @ErrorDesc='Error Occured While Inserting into TIX_PAYMENT_SCHEDULE FROM XML'

INSERT INTO TIX_PAYMENT_SCHEDULE
(
OwedAmountId,
ProposalId,
BrandId,
DueDate,
OverdueDate ,
CreatedDateTime,
LastUpdatedDateTime,
ExpectedAmount,
ActualAmountReceived,
ScheduleBatchJournalId,
RuleId,
TransactionStatusId,
ActionId,
IsLate,
IsPaymentReceived ,
IsValidSchedule,
--Added by DC : 119
IsCatchupBalanced,
CatchupBalanceIdentifier,
HasModified
---------------------------------------------------
)
SELECT
Main.ELEMENT.value('(OwedAmountId)[1]','int') AS OwedAmountId,
Main.ELEMENT.value('(ProposalId)[1]','int') AS ProposalId,
Main.ELEMENT.value('(BrandId)[1]','int') AS BrandId,
convert(datetime,Main.ELEMENT.value('(DueDate)[1]','varchar(100)')) AS DueDate,
convert(datetime,Main.ELEMENT.value('(OverdueDate)[1]','varchar(100)')) AS OverdueDate,
@ToDate AS CreatedDateTime,
@ToDate AS LastUpdatedDateTime,
convert(decimal(18,2),Main.ELEMENT.value('(ExpectedAmount)[1]','varchar(100)')) AS ExpectedAmount,
convert(decimal(18,2),Main.ELEMENT.value('(ActualAmountReceived)[1]','varchar(100)')) AS ActualAmountReceived,
Main.ELEMENT.value('(ScheduleBatchJournalId)[1]','bigint') AS ScheduleBatchJournalId,
Main.ELEMENT.value('(RuleId)[1]','int') AS RuleId,
Main.ELEMENT.value('(TransactionStatusId)[1]','int') AS TransactionStatusId,
Main.ELEMENT.value('(ActionId)[1]','int') AS ActionId,
Main.ELEMENT.value('(IsLate)[1]','char(1)') AS IsLate,
Main.ELEMENT.value('(IsPaymentReceived)[1]','char(1)') AS IsPaymentReceived,
Main.ELEMENT.value('(IsValidSchedule)[1]','char(1)') AS IsValidSchedule

--Added by DC for 119

,Main.ELEMENT.value('(IsCatchupBalanced)[1]','char(1)') AS IsCatchupBalanced
,Main.ELEMENT.value('(CatchupBalanceIdentifier)[1]','nvarchar(1000)') AS CatchupBalanceIdentifier
,@HasModified
---------------------------------------------------------------------

FROM @XMLParams.nodes ('(/ROOT/DATA)') AS Main(ELEMENT)

END--END IF


END TRY--Main END TRY
BEGIN CATCH --Main BEGIN CATCH



SELECT @ErrorMessage = @ErrorDesc+Char(13)+Error_Message(),
@ErrorSeverity = Error_Severity(),
@ErrorState = Error_State(),
@ErrorNumber = Error_Number(),
@ErrorProcedure = Error_Procedure(),
@ErrorLine = Error_Line()
RAISERROR(
@ErrorMessage,
@ErrorSeverity,
@ErrorState,
@ErrorNumber,
@ErrorProcedure,
@ErrorLine
)
END CATCH --Main END CATCH
END --Main END





STOREDPROCEDURE 2



CREATE PROCEDURE [dbo].[TIX_PRC_GET_PAYMENTSCHEDULE_SCHEDULE_FOR_DATE_RANGE]
(

@ToDate datetime,
@IsValidSchedule char(1)

)
AS
BEGIN
SET NOCOUNT ON

--Exception Handling Variable Declaration
DECLARE @ErrorMessage NVARCHAR(200),
@ErrorNumber INT,
@ErrorSeverity INT,
@ErrorState INT,
@ErrorProcedure NVARCHAR(50),
@ErrorLine INT,
@ErrorDesc NVARCHAR(100)


BEGIN TRY --Exception Handling
SET @ErrorDesc='Error Occured while fetching records from TIX_PAYMENT_SCHEDULE'


SELECT
PaymentScheduleId,
OwedAmountId,
ProposalId,
DueDate,
OverdueDate,
ExpectedAmount,
TransactionStatusId,
IsPaymentReceived,
IsLate,
ActionId,
ActualAmountReceived,
IsValidSchedule,
BrandId,
CaseScheduleId,
ReasonId,
Comments,
NoOfDays,
ActionDate,
IsCatchupBalanced,
CatchupBalanceIdentifier,
HasModified
from TIX_PAYMENT_SCHEDULE with (nolock)
WHERE DUEDATE <=@ToDate AND IsValidSchedule=@IsValidSchedule


SELECT DISTINCT OwedAmountId,proposalId,brandId from TIX_PAYMENT_SCHEDULE with (nolock) WHERE DUEDATE <=@ToDate AND IsValidSchedule=@IsValidSchedule Order By OwedAmountId,ProposalId,BrandId asc
SELECT DISTINCT ProposalId from TIX_PAYMENT_SCHEDULE with (nolock) WHERE DUEDATE <=@ToDate AND IsValidSchedule=@IsValidSchedule Order By ProposalId asc


END TRY
BEGIN CATCH
SELECT @ErrorMessage=@ErrorDesc+CHAR(13)+ Error_Message(),
@ErrorNumber=Error_Number(),
@ErrorState=Error_State(),
@ErrorProcedure=Error_Procedure(),
@ErrorLine=Error_Line(),
@ErrorSeverity=Error_Severity()

RAISERROR(
@ErrorMessage,
@ErrorSeverity,
@ErrorState,
@ErrorNumber,
@ErrorProcedure,
@ErrorLine
)

END CATCH

END
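One observation on the second procedure, added here as a suggestion rather than part of the original scripts: all three SELECTs filter on IsValidSchedule and DueDate, so a single nonclustered index leading on those two columns lets each query seek instead of scanning 25 million rows:

CREATE NONCLUSTERED INDEX IX_TIX_PAYMENT_SCHEDULE_Valid_DueDate
ON dbo.TIX_PAYMENT_SCHEDULE (IsValidSchedule, DueDate)
INCLUDE (OwedAmountId, ProposalId, BrandId);

For the load itself, shredding and inserting the XML in smaller batches, or disabling the nonclustered indexes during the insert and rebuilding them afterwards, keeps log growth and index maintenance from dominating the 3 hours.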





Thanks & Regards

Rajesh Varma

View 6 Replies View Related

Deleting Millions Of Records From A Table - Faster Way?

Jun 21, 2007

Hi!
I am using SQL Server 2005 with SP1.

I want to delete 30-40 million rows from a transactional table. What's the fastest way to delete these rows? Just deleting 300,000 rows takes 30 minutes, and I don't want to truncate the table.
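When most of a table is going away, it is often faster to copy the rows you are keeping into a new table and swap the names than to delete the majority in place (a sketch; the table name and keep condition are hypothetical, and indexes, constraints, and permissions must be recreated on the new table):

-- copy the survivors (minimally logged under SIMPLE or BULK_LOGGED recovery)
SELECT *
INTO   dbo.BigTable_Keep
FROM   dbo.BigTable
WHERE  TransactionDate >= '20070101';   -- rows to keep

-- swap the names, then rebuild indexes/constraints on the new dbo.BigTable
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_Keep', 'BigTable';

If the table has to stay online throughout, the alternative is deleting in small batches (DELETE TOP (5000) in a loop) so each transaction stays short and the log space can be reused between batches.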

any help would be appreciated

Thanks

View 9 Replies View Related

25 Million Records Insert Operation Takes More Than 3 Hrs

May 14, 2008

Hi Teachies,


I am using SQL Server Standard Edition with a good hardware configuration.

Into one of my tables I am inserting around 25 million records, and that takes more than 3 hours.

The same thing happens when fetching records from that table.

This database contains only a single filegroup, i.e. PRIMARY,

and that table has a clustered index as well as nonclustered indexes.

It does not have any triggers.

How do I improve this performance?

Table partitioning cannot be used in SQL Server Standard Edition.

Would dropping all nonclustered indexes before the insert operation improve performance?

Please advise.

Thanks

Rajesh Varma



View 17 Replies View Related

Accessing 2 Million Records Using Full Text Search

Apr 25, 2008

Hi,
My full text search on 2 million records is taking a long time to show the result.
I have created the full text catalog on a RAM drive to make retrieval faster,
but it is still taking more than 1 minute to get the matching pattern.
I am using SQL Server 2005. I have 2 columns (id, text) in my table.

This is my unique index script

USE [SAMPLE]
GO
CREATE UNIQUE NONCLUSTERED INDEX [ui_productid] ON [dbo].[Products]
(
[id] ASC
)WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

This is my primary key index script:
USE [SAMPLE]
GO
ALTER TABLE [dbo].[Products] ADD CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

This is my query..

SELECT D.[id], D.productname
FROM dbo.Products AS D
WHERE CONTAINS(productname, 'ford')

What should I do to show the result in 3-4 seconds?
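If only the best matches are needed, CONTAINSTABLE with its top_n_by_rank argument limits how much of the full-text index is materialized, which usually improves response time noticeably (a sketch against the poster's Products table):

SELECT  D.[id], D.productname, K.[RANK]
FROM    CONTAINSTABLE(dbo.Products, productname, 'ford', 100) AS K   -- best 100 matches only
JOIN    dbo.Products AS D
  ON    D.[id] = K.[KEY]
ORDER BY K.[RANK] DESC;

Beyond that, full-text performance at this size is mostly about giving the catalog enough memory and keeping the base-table lookup (the join on [id]) on the clustered primary key, which this table already has.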

View 1 Replies View Related

Querying The Primary Key For Performance

Sep 15, 2006

Hello, I am developing my first software product. I am trying to achieve high database performance, since this will be my main selling point.

My database will contain tens of thousands of rows, and I am worried that if I run a query the response will be too slow to be practical.

So my question is:

if I made one query on my database and it returned 10 results, wouldn't it be faster to make 10 queries using the primary key?
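Generally not: ten round trips cost more than one indexed query that returns the same ten rows. If the ten key values are known up front, a single primary-key lookup covers them all (a sketch with hypothetical names):

SELECT CustomerId, CustomerName
FROM   dbo.Customers
WHERE  CustomerId IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);   -- one round trip, one seek per key

Tens of thousands of rows is a small table for SQL Server; with a sensible index either approach answers in milliseconds, so the single-query form wins mainly by saving network round trips.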

Will appreciate any responses.

View 4 Replies View Related

Querying Systables To Determine Primary Keys

Apr 1, 2004

Does anyone know how I could query the systables or perhaps use the information schema views to determine which columns of tables are the primary keys? Any help would be greatly appreciated.
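A minimal sketch using the INFORMATION_SCHEMA views (available in SQL Server 2000 and later), listing every primary key column per table:

SELECT  kcu.TABLE_SCHEMA,
        kcu.TABLE_NAME,
        kcu.COLUMN_NAME,
        kcu.ORDINAL_POSITION
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
JOIN    INFORMATION_SCHEMA.KEY_COLUMN_USAGE  AS kcu
  ON    kcu.CONSTRAINT_NAME   = tc.CONSTRAINT_NAME
  AND   kcu.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
WHERE   tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
ORDER BY kcu.TABLE_NAME, kcu.ORDINAL_POSITION;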

View 4 Replies View Related

SQL 2012 :: Accurate Sorting Data Each Time With Millions Of Records Without Time Field?

Apr 25, 2014

Sample Table

USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON

[Code] ....

It seems to work fine with one million records.

Each primary key is unique, but the begindate is non-unique, and I guess that even if I use datetime2 and add nanoseconds there is still a chance of a duplicate datetime, from what I have read, since the date is imported via XML from multiple sources.
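Duplicate begindate values only make the sort unstable if nothing breaks the tie; adding the unique primary key as the last sort column makes the ordering deterministic and repeatable without needing finer time precision (a sketch; the column names are assumed from the description):

SELECT  Id, BeginDate /* , other columns */
FROM    dbo.Testing
ORDER BY BeginDate, Id;    -- Id breaks ties between rows with the same BeginDate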

View 7 Replies View Related

Querying Top Records

Feb 25, 2008

I have a table in my database that records lab tests for patients where I work. There are columns for PatientID, TestID, and TestDate. Each patient can have multiple tests.

I need to pull the top 4 tests, by TestDate, for each patient in the table. I've been trying to do it with the SELECT TOP (4) clause but have had no luck so far. Any help would be appreciated.
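TOP (4) by itself applies to the whole result set; to take four rows per patient, the usual SQL Server 2005+ pattern is ROW_NUMBER() partitioned by patient (a sketch; the table name LabTests is an assumption):

WITH Ranked AS
(
    SELECT  PatientID, TestID, TestDate,
            ROW_NUMBER() OVER (PARTITION BY PatientID
                               ORDER BY TestDate DESC) AS rn
    FROM    dbo.LabTests
)
SELECT PatientID, TestID, TestDate
FROM   Ranked
WHERE  rn <= 4;        -- the 4 most recent tests for each patient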

Thanks.


View 4 Replies View Related

How To Automatically Create New Records In A Foreign Table When Inserting Records In A Primary Table.

Sep 13, 2006

Ok, I'm really new at this, but I am looking for a way to automatically insert new records into tables. I have one primary table with a primary key ID that is automatically generated on insert, and 3 other tables with foreign keys pointing to that primary key. Is there a way to automatically create new records in the foreign-key tables with the new ID? Would this be a job for a trigger or a stored procedure? I admit I haven't studied up on those yet; I am learning things as I need them. Thanks.
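Either works; a trigger keeps it automatic. A minimal sketch, assuming a hypothetical parent table dbo.Person and one child table dbo.PersonDetail whose PersonId column references the parent:

CREATE TRIGGER trg_Person_Insert
ON dbo.Person
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- create a stub row in the child table for every newly inserted parent row
    INSERT INTO dbo.PersonDetail (PersonId)
    SELECT i.PersonId
    FROM   inserted AS i;
END

The inserted pseudo-table makes this safe for multi-row inserts; the same INSERT is repeated for each of the three child tables. The other common approach is a stored procedure that inserts the parent, captures SCOPE_IDENTITY(), and then inserts the child rows explicitly.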

View 4 Replies View Related

Inserting Records On Primary Key

Feb 16, 2006

I'm getting a foreign key error as well as an identity insert error when I try to run a stored proc that inserts values from another table. Help!

View 4 Replies View Related

Delete Records When You Have A Primary Key Of Two Columns

Jun 12, 2006

Hi everyone.

I have two tables: the catalog table and the detail table.
The two tables are joined by a two-column key.

I want to write a DELETE statement that removes all the rows in the catalog table that aren't in the detail table, in SQL Server.

The only problem is that the key is composed of two columns.

If the key were made of one column, that would be easy, like this:

DELETE FROM CATALOG
WHERE CATALOGKEY NOT IN (SELECT CATALOGKEY FROM DETAIL)

But is it possible to write such a DELETE statement when the key has two columns?
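Yes; with a composite key the usual form is NOT EXISTS correlated on both columns (a sketch using the poster's table names, with hypothetical key column names Key1 and Key2):

DELETE c
FROM   CATALOG AS c
WHERE  NOT EXISTS (SELECT 1
                   FROM   DETAIL AS d
                   WHERE  d.Key1 = c.Key1
                     AND  d.Key2 = c.Key2);

NOT EXISTS is also safer than NOT IN here, because NOT IN returns no rows at all if the subquery ever produces a NULL key value.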


Thanks

View 2 Replies View Related

Delete Duplicate Records Without Primary Key

Feb 5, 2012

I want to delete the duplicate records from a table, keeping one record from each group.

My base table is 'info':

id   name   class
2    abc    6a
3    abc    6a
4    abc    6a
1    abc    6a
2    abc    6a
4    abc    6a
4    abc    6a
3    abc    6a
3    abc    6a
1    abc    6a
2    abc    6a
5    abc    6a

id - int
name - text
class - varchar
(there is no primary key in this table)

Now I want the result in the following way:
id   name   class
2    abc    6a
3    abc    6a
4    abc    6a
1    abc    6a

I have tried the following query and it runs fine, but it is not dynamic:

DELETE top (SELECT COUNT(*)-1 FROM aaa WHERE id ='3') --or put some number
FROM aaa
WHERE id ='3'-- or put some number
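A set-based alternative that does not need the id values hard-coded is to number the duplicates with ROW_NUMBER() and delete every row numbered 2 or higher (requires SQL Server 2005 or later; a sketch against the poster's info table, casting name only because the text data type cannot appear directly in PARTITION BY):

WITH Dups AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY id, CAST(name AS varchar(8000)), class
                              ORDER BY (SELECT NULL)) AS rn
    FROM   info
)
DELETE FROM Dups
WHERE  rn > 1;      -- keeps exactly one row per (id, name, class) group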

View 14 Replies View Related

Insert Records Into Gaps In Primary Key

Feb 23, 2006

Hi to all,
I'm a new member here and I would like to ask for some help with my problem. I have an incremental primary key with a format like this: 001-01-001, 001-01-002, 001-01-003, etc. My problem is that I want to insert (supply) the 'missing' values, or 'gaps', in my primary key field; for example, given ..., 001-01-067, 001-01-068, 001-01-070, I want to insert the value 001-01-069 after the record 001-01-068. I have several gaps, some spanning several numbers, such as 005-04-007 to 005-04-020, which has a 13-record gap. Is there a way for a stored procedure to solve this? Thanks in advance.
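One way to find and insert the missing keys in a single set-based statement (a sketch only; it assumes the key always has the shape PPP-SS-NNN, that the gaps are confined to the last three-digit segment, that a helper table dbo.Numbers holds the integers 1 through 999, and that dbo.MyTable stands in for the real table; any other NOT NULL columns would need values supplied here too):

INSERT INTO dbo.MyTable (KeyCode)
SELECT  p.Prefix + RIGHT('000' + CONVERT(varchar(3), num.n), 3)
FROM   (SELECT LEFT(KeyCode, 7) AS Prefix,                      -- the '001-01-' part
               MAX(CONVERT(int, RIGHT(KeyCode, 3))) AS MaxSeq   -- highest existing number per prefix
        FROM   dbo.MyTable
        GROUP  BY LEFT(KeyCode, 7)) AS p
JOIN    dbo.Numbers AS num
  ON    num.n < p.MaxSeq                                        -- only fill below the current maximum
WHERE   NOT EXISTS (SELECT 1
                    FROM   dbo.MyTable AS t
                    WHERE  t.KeyCode = p.Prefix + RIGHT('000' + CONVERT(varchar(3), num.n), 3));

Wrapped in a stored procedure, this fills every gap for every prefix in one pass instead of hunting for them row by row.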

View 8 Replies View Related

Syntax To Add Records If Primary Key = List

Sep 21, 2005

I am new to SQL administration. From a list of IDs that are the primary key in one table (i.e. the Customer table), I want to make changes in tables that use those IDs as a foreign key. Basically I want to say: if fk_ID is in the list [1,2,3,4,5], then apply these statements to that record. Where do I begin? Thanks for help with this low-level view of SQL programming. -tom
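The set-based translation of that pseudo-code is a single statement with an IN list in the WHERE clause (a sketch; the table, column to change, and new value are hypothetical):

UPDATE dbo.Orders
SET    Status = 'Archived'
WHERE  fk_ID IN (1, 2, 3, 4, 5);   -- the list of Customer primary key values

If the list comes from a query rather than literals, the IN (...) can hold a subquery instead, e.g. WHERE fk_ID IN (SELECT CustomerID FROM dbo.Customer WHERE ...).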

View 8 Replies View Related

Exporting Tables Along With Their Primary And Foreign Keys And Records

Jun 27, 2007

Hi all,


I'm trying to export 120 tables from SQL Server 2000 to SQL Server 2005 with their primary and foreign keys and the corresponding records.
Is there a way to do this?

Thanks for any help.

Abrahim

View 6 Replies View Related

Shouldn't The Order Of Records Be Based On The Key Or Primary Index?

Jun 11, 2007

I upsized an Access database with a key/index on ordernumber and linenumber.

However, if I open the table in Management Studio the records aren't ordered this way (the same goes for SELECT * FROM the table). I get:










Ordernumber   Linenumber
200724001     37
200724004     3
200724006     33
200724001     3
200724011     19
200724014     5
200724006     37
200724011     19
200724006     28



The same goes for my Crystal Reports files: since the records aren't ordered by ordernumber/linenumber, all my formulas go berserk.



Am I wrong in thinking the records should be ordered according to the primary index?

Please help because I don't want to have to change all my 40+ reports to include an "ORDER BY"
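For reference, SQL Server only guarantees row order when the query asks for it; the clustered primary key makes that ORDER BY cheap, but does not replace it. The report queries would need something like (table name hypothetical):

SELECT Ordernumber, Linenumber /* , other columns */
FROM   dbo.OrderLines
ORDER  BY Ordernumber, Linenumber;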



Best regards,



Mike

View 7 Replies View Related

Inserted Records Missing In Sql Table Yet Tables' Primary Key Field Has Been Incremented.

Jun 18, 2007

I have a SQL Server 2005 Express table with an automatically incremented primary key field. I use a DetailsView to insert new records, and in the DetailsView ItemInserted event I send out automated notification emails.
I then received two automated emails (indicating two records had been inserted), but looking at the database, the records are not there. What confuses me is that the table's primary key field had been incremented by two, an indication that the two records should actually be in the table. Recovering these records is not a big deal because I can re-enter them, but I am wondering what the possible cause is. How come the ID field was incremented and yet the records are not there? I am 100% sure no one deleted them; only I can delete a record.
And then, when I insert new records now, they are all there in the database, but with the two ID numbers for those missing records skipped. It's not crucial data, but for my own learning I would like to understand why it happened, because next time it might be costly.

View 5 Replies View Related

How To Find Missing Records From Tables Involving Composite Primary Keys

Nov 23, 2006

Table 1

Code     Quarter
500002   26
500002   27
500002   28
500002   28.5
500002   29

Table 2

Code     Qtr
500002   26
500002   27

 

I have these two identical tables, with the columns Code & Qtr forming a composite primary key.

Can anybody help me with how to compare the two tables to find the records not present in Table 2?

That is, I need this result:







Code     Quarter
500002   28
500002   28.5
500002   29

I have come up with this solution

select scrip_cd,Qtr,scrip_cd+Qtr from Table1 where
scrip_cd+Qtr not in (select scrip_cd+qtr as 'con' from Table2)

I need to know if there is another way of doing the same thing.
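Two common alternatives that also avoid the string concatenation (which can misbehave when Qtr is numeric, e.g. 28.5): NOT EXISTS correlated on both key columns, or EXCEPT on SQL Server 2005 and later. Sketches using the column names from the sample tables above:

-- NOT EXISTS on the composite key
SELECT t1.Code, t1.Quarter
FROM   Table1 AS t1
WHERE  NOT EXISTS (SELECT 1
                   FROM   Table2 AS t2
                   WHERE  t2.Code = t1.Code
                     AND  t2.Qtr  = t1.Quarter);

-- or, on SQL Server 2005+
SELECT Code, Quarter FROM Table1
EXCEPT
SELECT Code, Qtr     FROM Table2;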

Thanks in Advance

Jacx

View 3 Replies View Related

Inserting Millions Of Rows

May 26, 2004

Hi everybody,

I have an application that generates a lot of rows, from 1 million to 2 million,
and I want to insert these records into MS SQL Server in a fast way.

I am currently looping through the records while they are loaded in a DataSet,
building a command text that generates an INSERT query for each row,
and running it against SQL Server,

but it takes a long time to finish.
Is there a way to bulk insert this data?

Thanks for your help.
Bolos
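Yes; the row-by-row INSERT statements are the slow part. If the rows can be written to a flat file first, a single BULK INSERT loads them in one minimally logged operation (a sketch; the table name, file path, and delimiters are hypothetical):

BULK INSERT dbo.TargetTable
FROM 'C:\loads\rows.txt'
WITH (FIELDTERMINATOR = '\t',
      ROWTERMINATOR   = '\n',
      TABLOCK,                  -- helps minimal logging
      BATCHSIZE       = 100000);

From application code, the equivalent without an intermediate file is the bulk copy API (SqlBulkCopy in ADO.NET, or bcp from the command line), which streams the rows to the server in bulk rather than one INSERT at a time.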

View 7 Replies View Related

Storing Millions Of Tiles

Jul 27, 2007

Hi everyone,

I have been trying to store millions of files (layer tiles) in NTFS now for a while with little success as the read/write speed drops off the chart or NTFS itself gets corrupted. My questions are:

1) Is there any way to store millions of files successfully in NTFS?
2) What does VE use to store its tiles (I am guessing SQL Server)?
3) If VE does not use SQL Server to store tile images, has anyone tried it and what are the pros/cons?

Any help would be greatly appreciated...

Thanks in advance,

Matt

View 2 Replies View Related

10 Millions Rows - 45 Minutes??

Feb 21, 2008



Hi everyone - I have an ETL package which loads about 10 million rows from SQL 2005 staging tables to new, empty tables (no indexes or constraints) in another SQL 2005 DB, to be SWITCHED into the main partitioned data tables.

Both databases reside on the same SQL Server instance - it is a dev server, so the disks aren't super fast/SAN speeds, but it has plenty of RAM/CPU and SCSI disks.

The insert takes about 45 minutes - can I get this working any faster, or is this typical for 10 million rows? I've messed about with the data flow a few times but I can't seem to get any significant improvements.

Any tips anyone??

I perform several lookups on dimensions - these are not cached.

I do query the source table concurrently with different WHERE clauses & run two pipelines processing the data into 2 destination tables.

Would it be better to query the base table once & use a conditional split instead of the two separate queries??

I also multicast from each pipeline and use a UNION ALL to log some of the rows from each pipeline to another destination table.

Hope this makes sense. Any ideas or tips on how I can speed up this kind of transform would be appreciated.

I'm using oleDB connections.

Hope this makes some kind of sense! Thanks for any advice!

Sinister Pengiun

View 5 Replies View Related

Retrieving And Sorting Millions Of Rows

Oct 11, 2007

Hello, we are currently in the process of implementing a SQL Server database where a couple of tables will have millions of rows (about 98 million, and growing) and a web site that will retrieve and sort the data (read only). How do the ASP.NET GridView and SqlDataReader behave in a situation like that? Will the response be very slow? Is there an alternative? Is there any example on the net?
Assuming tables are well tuned and well indexed.
Thank you in advance.
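Pulling 98 million rows to the web tier will always be slow, however well the tables are indexed; the usual pattern is to page in the database and bind only one page at a time to the GridView. A SQL Server 2005 sketch (hypothetical table and column names) of the query the page's data source would run:

DECLARE @PageNumber int, @PageSize int
SET @PageNumber = 1
SET @PageSize   = 50

WITH Numbered AS
(
    SELECT  OrderId, OrderDate, Amount,
            ROW_NUMBER() OVER (ORDER BY OrderDate, OrderId) AS rn
    FROM    dbo.Orders
)
SELECT OrderId, OrderDate, Amount
FROM   Numbered
WHERE  rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND   @PageNumber * @PageSize;   -- one page of rows only

With an index matching the ORDER BY, typical pages stay cheap; very deep pages cost more, in which case keyset paging (filtering on the last key value shown) avoids re-scanning from the start.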

View 4 Replies View Related

Inserting Millions Of Rows Into A Table

Aug 10, 2006

Hi,

I have a DataTable in memory and I want to write a C# code to dump the data into a SQL database. Is there a faster way of dumping millions of rows into a SQL table besides running INSERT INTO row by row?

Thank you,

Jina

View 3 Replies View Related

SQL Server Admin 2014 :: Few Record Loss In Table Primary Key Where Same Records Exists In Foreign Key Table?

Jun 21, 2015

Previously, the same records existed both in the table with the primary key and in the table with the foreign key. We have found that 7 records were lost from the primary key table, but the same records still exist in the foreign key table.
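A quick way to list the orphaned rows (a sketch; the parent/child table and key column names are hypothetical):

SELECT c.*
FROM   dbo.ChildTable  AS c
LEFT  JOIN dbo.ParentTable AS p
  ON   p.Id = c.ParentId
WHERE  p.Id IS NULL;          -- child rows whose parent record is missing

Rows like this normally cannot exist while a trusted, enabled foreign key constraint is in place, so it is worth checking whether the constraint was disabled or created WITH NOCHECK when investigating how the 7 parent rows disappeared.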

View 3 Replies View Related






