Attention: To all SSIS gods like Donald Farmer, Jamie Thompson, Michael Entin
I am very disappointed with the performance of SSIS against Oracle. When I do a simple lookup, the component caches the entire lookup table, which is killing my performance. What is worse, when I try to change the caching mode from Full to Partial or None, the component shows an error (x) sign.
I am also frustrated by the warnings all over the place saying the code page value for the column cannot be determined, and that it is defaulting to 1252. I have looked everywhere to find out what the code page of my Oracle database actually is, without any luck.
The damn package I created takes 10 minutes to process ONLY 10,000 records, which is slower than a legacy Cognos Decision Stream!!! Unacceptable!!! My lookup tables average 200,000 records and I DO NOT WANT TO CACHE THEM ALL for 10,000 input rows. Something seems messed up!!!
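For what it's worth, one common workaround (a minimal sketch, with hypothetical table and column names) is to feed the Lookup from a SQL query rather than the whole table, so the full cache holds only the columns - and, where a predicate exists, only the rows - the lookup actually needs:

SELECT lookup_key, return_value      -- only the join column and the returned column
FROM dbo.HypotheticalLookupTable     -- hypothetical name; the real table has ~200,000 rows
WHERE active_flag = 'Y'              -- any restricting predicate shrinks the cache further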
I am currently querying two databases - one SQL Server and one Oracle - to find records that are in one but not the other (essentially a reconciliation). This works fine in MS Access, using two pass-through queries to return the results from the two databases, then another query to find the data in one but not the other.
I decided to try the linked server approach because I thought it would give better performance - but strangely enough, when I query the Oracle database from MS Access the query takes about 22 seconds, whereas when I query the linked server from SQL Server the same query takes about 55 seconds.
The only difference I can see is that the pass-through query in MS Access connects using ODBC, and the linked server connects using the Microsoft OLE DB Provider for Oracle.
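For comparison, assuming a linked server named ORA_LINK (hypothetical), the four-part-name form can make SQL Server pull rows across and filter them locally, while OPENQUERY passes the whole statement through to Oracle for remote execution - which behaves much more like the Access pass-through query:

-- Four-part naming: SQL Server may fetch remote rows and filter locally
SELECT * FROM ORA_LINK..SCHEMA1.MYTABLE WHERE SOME_COL = 'X'

-- Pass-through: the query text executes on Oracle itself
SELECT * FROM OPENQUERY(ORA_LINK, 'SELECT * FROM SCHEMA1.MYTABLE WHERE SOME_COL = ''X''')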
Hello everyone, I have a very puzzling performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000. I encountered performance issues on the production server with a particular query: it takes approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found they were exactly the same in both cases. Then I took a look at the various indexes, and again found no differences in the table indices. If both databases are identical, I'm assuming the issue is related to some external hardware factor (disk space, memory, etc.) or to OS/software issues (service packs, SQL Server configuration, etc.).

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs: 2x Intel Xeon 2.67 GHz, total physical memory 2 GB (available physical memory 815 MB), Windows Server 2003 SE w/ SP1.
Here are the dev server's system specs: 2x Intel Xeon 2.80 GHz, 2 GB DDR2-SDRAM, Windows Server 2003 SE w/ SP1.

I'm not sure what else to do; the query performance differs by an order of magnitude and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks, Brian T
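One way to narrow this down (a minimal harness; the SELECT is a hypothetical stand-in for the real query) is to capture I/O and CPU numbers for the same statement on both servers - identical logical reads with wildly different elapsed times points away from the plan and toward memory or disk:

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
-- run the actual slow query here; this is a hypothetical stand-in
SELECT TOP 100 * FROM dbo.SomeTable ORDER BY SomeColumn
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF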
Hi, I have just upgraded my SQL Server 6.5 box to 7.0. This SQL box (Compaq ProLiant 5500) has 1 GB of RAM, 4 Pentium Pro processors, and a Smart Array controller. About 200 users hit that server constantly. I configured the server according to the Compaq white paper and Microsoft's recommendations; however, I am still suffering a huge performance hit. I set up Performance Monitor to see what is happening, and what I see is all 4 processors at 70% processor time constantly. What I have heard is that SQL 7.0 runs much better on Pentium III processors. Is that true? Or do you have any other recommendations?
Hello gurus,

SQL Server 2000 on Windows 2003 Server, Standard Edition. Firstly, I'm not a SQL Server DBA, but I have a little experience with Oracle 9i and Oracle Rdb.

An application I've inherited has started performing very, very slowly over the last few days; as far as I know there have been no modifications or changes in the volume of data the db is holding. The database is in simple logging mode and is updated twice per day from an Oracle Rdb database. The SQL database is all in the primary filegroup and sits on four disks (RAID, mirrored) which form one logical disk of 128 GB. It consists of approximately 120 tables, each with a primary key and a number of unique indexes per table. Apart from a twice-a-day update in the small hours, the only access to this database is read-only: users connect via the web and use predefined Crystal Reports to retrieve data.

Questions then:
- Should this db have more than one filegroup, and should I put the indexes in a different group? Is this relevant when the underlying storage is RAID?
- Should it be using mirrored RAID, should it be striped, or should I steer clear of RAID for a database?
- The indexes have not been rebuilt for some time (probably one year). Is this something I should be doing once per day/week, given that I do have the downtime window?
- Should I be doing a database/table reorg at fairly regular intervals?

I'm going to generate a workload file from SQL Profiler and see what it suggests in conjunction with the Index Tuning Wizard. I appreciate that poor performance and tuning is a huge subject and not an exact science, but any tips or comments would be gratefully received.

Regards, Dave.
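On the index-rebuild question above, a hedged sketch of the usual SQL Server 2000 maintenance pass (table name hypothetical) is to measure fragmentation first and rebuild only where scan density is poor:

-- Measure fragmentation (SQL Server 2000)
DBCC SHOWCONTIG ('dbo.BigTable') WITH ALL_INDEXES

-- Rebuild every index on the table (empty string = all indexes, 0 = keep original fillfactor)
DBCC DBREINDEX ('dbo.BigTable', '', 0)

-- Refresh column statistics afterwards
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN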
On the server I have SQL Server 2005 with a database running in compatibility mode 8.0 (SQL 2000). On the desktops (client side) I have SQL Server 2005 Express. The desktop databases copy data from the server database through a linked server. Both databases are in compatibility mode 8.0.
This is my problem:
Copying the data with an ad hoc query - INSERT INTO LOCAL-DATABASE ... SELECT .... FROM REMOTE-DATABASE (over the WAN) - takes much more time than it did when I used MSDE on the desktops.
I would like to know what could be causing this delay.
Is there a way to improve this query? The execution plan states that it's making use of the indexes ('Clustered Index Seeks'), but the query takes 30 minutes to complete. The index statistics are also up to date! If I use just an INNER JOIN, the query completes in 2 seconds! How can I make the LEFT OUTER JOIN more optimal?

SELECT A.MeterSerial AS 'PeaceMeter', B.MeterFrom, B.MeterTo,
       CASE WHEN A.MeterSerial = B.MeterFrom OR A.MeterSerial = B.MeterTo
                 OR A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
            THEN 'Y' ELSE 'N' END AS 'ComplexMeter',
       B.ComplexMeterType
FROM Peace.MeterData_DR1 A
LEFT OUTER JOIN (SELECT MeterFrom, MeterTo, ComplexMeterType
                 FROM Peace.ComplexMeters2) B
    ON A.MeterSerial = B.MeterFrom
       OR A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
WHERE A.MeterSerial != ''
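One thing that might help (a sketch, assuming MeterFrom <= MeterTo holds for every row in ComplexMeters2): the equality branch of the OR is already covered by the BETWEEN, so the join can collapse to a single range predicate, and the derived table is unnecessary. ORs in join conditions are what usually force the optimizer away from an efficient join strategy:

SELECT A.MeterSerial AS 'PeaceMeter', B.MeterFrom, B.MeterTo,
       CASE WHEN B.MeterFrom IS NOT NULL THEN 'Y' ELSE 'N' END AS 'ComplexMeter',
       B.ComplexMeterType
FROM Peace.MeterData_DR1 A
LEFT OUTER JOIN Peace.ComplexMeters2 B
    ON A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
WHERE A.MeterSerial != ''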
Hi, I am having a problem with one of my stored procedures in SQL Server 2005. Basically the proc brings back a data set for the ASP.NET front end, but it runs very slowly when called from .NET. I have run SQL Profiler on the procedure: the call takes around 20 seconds to bring back the data for .NET, whereas if I copy and paste the executed SP from Profiler into Management Studio and run it in a query window, it runs in around 1 second - even if I run DBCC DROPCLEANBUFFERS before I run it. More worryingly, the CPU usage is 40 times higher and the number of reads is 50% higher from .NET.

We have the .NET front end spread over 3 clustered web servers with load balancers, and the SQL db is on a dedicated rig. I am having the same problem on my locally published version of the site as well, so I don't think it's an issue with the web site. If anyone has any ideas on this then please let me know, as I am completely stuck. I should mention that the issue only recently started occurring; it used to be fine, and the rest of the site is fine...

Thanks in advance, Tom
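A hedged observation: ADO.NET connections and Management Studio use different default SET options (notably ARITHABORT), so the two environments can get different cached plans for the same proc - classic parameter-sniffing territory. A quick test (proc and parameter names hypothetical) is to mimic the ADO.NET settings in a query window:

SET ARITHABORT OFF   -- ADO.NET default; SSMS defaults to ON
EXEC dbo.MyProc @SomeParam = 42
-- If the proc is now slow in Management Studio too, the two environments
-- were simply using different cached plans for the same procedure.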
I created a cube on my development box; I tested it, and performance was great. But when I try to deploy it to the production server from VS, the screen freezes without any error message... After some time playing around with security I gave up, created a backup of the cube, and sent it over to the IT department (I have no direct control over the production server, so they restored the cube). With the cube in production it was a matter of just processing it... well, that didn't go well either... it would take forever from my development box to see results... Then I tried to browse a dimension using the data already in the backup... a simple 4-value dimension would take 2-4 minutes to load on screen...
I can't understand why browsing the cube is so slow on the production server; the IT admin even reported that browsing locally was slow too... The server has 16 GB of RAM and is a dual processor, and he didn't notice any shortage of CPU or memory while browsing the cube...
Have you experienced this, or can you help me troubleshoot what's going on?
We just converted from SQL Server 2000 to SQL Server 2005 and it seems we are having performance trouble. Sets of queries that used to take about 15 seconds now take almost 2 minutes. We used a utility to find out what was taking so long and found that almost all of the wait time belonged to LCK_M_S. Does anyone have any suggestions?
Here is a snippet of code I used to test the speed against 2000:
declare @counter int
set @counter = 1

while @counter < 200
begin
    update UPR40500 set actindx = 96 where actindx = 96
    set @counter = @counter + 1
end
This takes 15 seconds when it used to take virtually no time at all.
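If it helps anyone diagnose the same thing: on 2005 the accumulated lock waits and the current blocking chains can be read straight from the DMVs:

-- Accumulated lock waits since the instance started
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC

-- Who is blocking whom right now
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0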
I'm having some trouble with indexes on SQL Server 2005. I'll explain it with a simplified example: I have a Customers table and a stored procedure to list customers:
create table Customers(
    CusID int not null,
    Name varchar(50) null,
    Surname varchar(50) null,
    CusNo int not null,
    Deleted bit not null
)
create proc spCusLs (
    @CusID int = null,
    @Name varchar(50) = null,
    @Surname varchar(50) = null,
    @CusNo int = null
) as
select CusID, Name, Surname, CusNo
from Customers
where Deleted = 0
  and CusID <> 1000
  and (@CusID is null or CusID = @CusID)
  and (@CusNo is null or CusNo = @CusNo)
  and (@Name is null or Name like @Name)
  and (@Surname is null or Surname like @Surname)
order by Name, Surname
create nonclustered index ix_customers_name on customers ([name] asc)
    with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]

create nonclustered index ix_customers_surname on customers (surname asc)
    with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]

create nonclustered index ix_customers_cusno on customers (cusno asc)
    with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]
I recently noticed that some tables, including Customers, had no indexes except their primary keys, so I added indexes on the "name", "surname" and "cusno" columns. This has dropped the number of IO reads. But the strange thing is: at one point it works well with name/surname searches (like 'joh%', '%'), but when CusNo is included it does a full scan. And vice versa: when the SP is recompiled using 'alter', it works ok with CusNo but not with name/surname. Recompile it, and it's reversed again. When run as a single query, the execution plan looks different.
What's happening? Is it perhaps something to do with statistics? This proc doesn't put a big load on the server, but there are some other procs with heavy queries suffering from the same thing, making server performance worse than before...
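This pattern looks like classic parameter sniffing on a catch-all query: whichever parameter combination compiles first fixes the plan for everyone. One hedged option on 2005 is to pay a compile on each call so every combination gets its own plan - a sketch against the proc body above:

select CusID, Name, Surname, CusNo
from Customers
where Deleted = 0
  and CusID <> 1000
  and (@CusID is null or CusID = @CusID)
  and (@CusNo is null or CusNo = @CusNo)
  and (@Name is null or Name like @Name)
  and (@Surname is null or Surname like @Surname)
order by Name, Surname
option (recompile)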
Using a query through a linked server is giving tremendously reduced performance. Is part of the problem linking from a SQL 2000 to a SQL 7.0 database? Are there any other tips out there? Thanks.
I need to select data using a very complex SQL statement. When I use an OLE DB Source component and choose SQL command as the data access mode, the process never ends. But when I put the same SQL statement in an Execute SQL Task component, it works fine and fast. Isn't an OLE DB Source always based on a SQL statement (select *)? So how is it possible that this component becomes so slow?
I was hoping I wouldn't be another poster with performance issues after migrating to SQL 2005 from SQL 2000, but here I am.
I am in the process of testing our databases on SQL Server 2005 for migration from SQL Server 2000, and certain portions of code have been affected negatively. I have read through many of the posts here and have tried out most of the recommendations. I will start with what I've done and then provide the actual SQL.
1) I rebuilt all indexes (using DBCC DBREINDEX with the table option).
2) Updated the db engine to the latest hotfix (build 3239) that addresses speed-related fixes.
3) Ran sp_createstats using the 'fullscan' option to create stats on all columns of all tables (minus indexed columns).
4) Since nothing seemed to work, I even ran UPDATE STATISTICS WITH FULLSCAN on all tables, even though I did not need to, as the rebuild would have refreshed the index stats. But I was willing to try anything.
I have confirmed that the execution plans are different even though the data on both SQL 2000 and SQL 2005 are identical (I put a copy on 2005). The plans themselves are huge, as the queries are huge. Here is the query.
SELECT InterimView.*, TestView.*
FROM View_LabDataExport_TestFormData_55 TestView
RIGHT OUTER JOIN (
    SELECT ReqView.*, CDView.*
    FROM View_LabDataExport_FormData_55 ReqView
    LEFT OUTER JOIN View_LabDataExport_FormData_CD_55 CDView
        ON (CDView.DB_SubjectID_CD = ReqView.DB_SubjectID)
) InterimView
ON (InterimView.DB_FormID = TestView.DB_FormID_T AND
The above query takes about 8 seconds to run on 2000 and about 1 minute on 2005. This is for a small dataset, and on larger datasets it will only be more pronounced (as confirmed by other teams in my company that have already migrated). Another point worth mentioning: if I remove TestView.* from the select list, it runs in 5 to 6 seconds. Is there an issue with SQL 2005 and a large number of columns, or anything of that sort? On 2000, the time remains about 8 seconds whether or not I remove it from the select list.
I'm experiencing performance problems with the Merge Join task. Every time I build a nice package using this task, I end up deleting it and using a SQL statement in the OLE DB Source to accomplish the join, since the Merge Join takes forever to run and crushes my computer in the process. It makes me feel I'm not using the abilities SSIS has to offer compared to DTS. Of course, for several thousand records it works fine, but in a production environment with hundreds of thousands of rows, it seems to be futile.
Hi, I've been creating a db application using MS Access and MSDE. Only two of us are using the application, and the server and the app both run great on my laptop (1.6 GHz Pentium M, 2 GB RAM, W2K Pro). The only problem is that when I take my laptop home, my coworker loses access to the server.

We recently purchased a dedicated server to run the db on at the office. It's a 2.8 GHz dual Xeon with 2 GB RAM, running XP Pro. We also bought SQL Server, but I installed the Personal Edition because we are not using a server OS. It's my understanding that XP can utilize both processors, and that the Personal Edition can use both processors as well. (On a side note, why is Enterprise Manager showing that I have 4 processors?) In addition, I understand PE has a workload governor that cripples performance when more than 5 T-SQL commands are being run simultaneously.

I backed up the db on my laptop and restored it on our new server. But when I run the exact same queries with the exact same number of rows, my queries on the new server take 3x longer(!?). Can someone please offer a few suggestions for why this is happening? What can I do to improve performance on the server machine? Please let me know if I need to supply more information.

Thanks, Alex
In one Data Flow Task (running by itself) I simply have a Raw File Source pushing rows to an OLE DB Command. This command executes an UPDATE (UPDATE table SET field = ?, anotherfield = ? WHERE thisfield = ?) and performs extremely slowly. It's possible to have 62K+ rows needing to be updated, and it typically takes this task around 25-30 minutes to run.
Is there anything I can do to increase performance? Are there any options other than the OLE DB Command to perform updates?
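A hedged note on why this is slow: the OLE DB Command fires the UPDATE once per row, so 62K rows means 62K round trips. The usual alternative (a sketch, staging table name hypothetical) is to land the raw-file rows in a staging table with an OLE DB Destination using fast load, then run one set-based UPDATE from an Execute SQL Task:

UPDATE t
SET    t.field        = s.field,
       t.anotherfield = s.anotherfield
FROM   dbo.TargetTable AS t
INNER JOIN dbo.UpdateStaging AS s    -- hypothetical staging table loaded by the data flow
        ON s.thisfield = t.thisfield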
The client production server CPU starts thrashing. Task Manager indicates that SQL Server is gobbling CPU cycles. Looking at the Replication Monitor, it is obvious that individual synchronisations to the mobile devices are taking significantly longer than expected.
Observing an individual synchronisation attempt, the 'Upload changes to Publisher' rows are resolved very quickly.
The 'Download changes to Subscriber' phase seems to take a very long time.
Along the way, the estimated completion does a few interesting things, like going from 100% complete with no estimated time remaining back to something like 77% with 2 minutes left to complete.
This sort of behaviour occurs when there are only a hundred rows to download.
Synchronisations for minimal amounts of data are suddenly taking anywhere from 2 to 15 minutes. Totally unacceptable from the client's perspective, but it seems that 2005 behaves quite differently from 2000 and the tricks are yet to reveal themselves.
Note - it is not a server hardware issue, as there is in excess of 3 GB of RAM, the database is on a SAN, and there are four 3 GHz CPUs in operation.
Any possible help appreciated as this issue is beginning to drag on.
The code below is from a nested view, which I've read should be avoided. I've also noticed GETDATE() is used, which I believe causes GETDATE() to be evaluated for every record selected (correct me if I'm wrong). I'm also guessing a JOIN containing a UNION against a SELECT statement is not a good idea. What other problems do you notice?
SELECT trans.Entry_Code, trans.D_C, trans.ADP_Security_#, trans.TRID,
       trans.Batch_Code, trans.Last_Money, null as Shares,
       Settle_date as Process_Date, null as Closing_Price, trans.Dwnld_Date,
       trans.Acnt, null as Mktval, cast(Null as varchar(20)) as Cusip_#,
       ACT.dbo.account.account_key AS account_key
FROM (SELECT * FROM ADPDBBOOK.dbo.YTD05B
      WHERE (DATEDIFF(mm, Process_Date, GETDATE()) <= 15)
      UNION
      SELECT * FROM ADPDBBOOK.dbo.YTD06B) trans
INNER JOIN ACT_DATA.dbo.account
    ON ACT_DATA.dbo.account.account_key = RIGHT(trans.Acnt, 5)
INNER JOIN tbl_Accounts_TransactionalData
    ON trans.Acnt = tbl_Accounts_TransactionalData.Acnt
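One concrete problem worth calling out: DATEDIFF(mm, Process_Date, GETDATE()) wraps the column in a function, so no index on Process_Date can be used for a seek. A hedged rewrite of that branch moves the arithmetic to the constant side; since DATEDIFF(mm, ...) counts month boundaries, the equivalent cutoff is the first day of the month 15 months back:

SELECT *
FROM ADPDBBOOK.dbo.YTD05B
WHERE Process_Date >= DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 15, 0)
-- same rows as DATEDIFF(mm, Process_Date, GETDATE()) <= 15,
-- but now an index range seek on Process_Date is possible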
Greetings. Below is the complete SQL, taken from aspfaq.com (retrieved from this newsgroup, I believe). The query at the end takes about two minutes to run. Does anybody have a better set-based way (sub-second response) to determine business days?

CREATE TABLE dbo.Calendar
(
    dt SMALLDATETIME NOT NULL PRIMARY KEY CLUSTERED, -- Date value
    IsWeekday BIT,        -- Is this date a weekday (M - F)
    IsHoliday BIT,        -- Is this date a holiday
    Y SMALLINT,           -- Year the date falls in
    FY SMALLINT,          -- Fiscal Year (needed?)
    Q TINYINT,            -- Quarter date falls in
    M TINYINT,            -- Numeric month of date
    D TINYINT,            -- Numeric day of date
    DW TINYINT,           -- Numeric DayOfWeek (Sunday=1, Monday=2)
    MonthName VARCHAR(9), -- String name of month
    DayName VARCHAR(9),   -- String name of day
    W TINYINT             -- Week number
)
GO

-- Start & End Dates
DECLARE @StartDate DATETIME
DECLARE @EndDate DATETIME
SET @StartDate = DATEADD(d, -1, '20000101')
SET @EndDate = DATEADD(d, -1, '20300101')

-- Total number of dates to generate
DECLARE @Days INT
SET @Days = DATEDIFF(DAY, @StartDate, @EndDate)

-- Create temporary Numbers table
CREATE TABLE #Numbers
(
    Number INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED
)

-- Insert a number into our temp table for each date to be generated
WHILE COALESCE(SCOPE_IDENTITY(), 0) <= @Days
BEGIN
    INSERT #Numbers DEFAULT VALUES
END

-- Generate a date for each day in our timespan
INSERT Calendar(dt)
SELECT DATEADD(DAY, Number, @StartDate)
FROM #Numbers
WHERE Number <= @Days
ORDER BY Number

-- Remove the temporary Numbers table
DROP TABLE #Numbers
GO

-- Update other columns
UPDATE dbo.Calendar SET
    IsWeekday = CASE WHEN DATEPART(DW, dt) IN (1, 7) THEN 0 ELSE 1 END,
    IsHoliday = 0,
    Y = YEAR(dt),
    FY = YEAR(dt),
    Q = CASE
            WHEN MONTH(dt) <= 3 THEN 1
            WHEN MONTH(dt) <= 6 THEN 2
            WHEN MONTH(dt) <= 9 THEN 3
            ELSE 4 END,
    M = MONTH(dt),
    D = DAY(dt),
    DW = DATEPART(DW, dt),
    MonthName = DATENAME(MONTH, dt),
    DayName = DATENAME(DW, dt),
    W = DATEPART(WK, dt)

-- Query in question (takes almost 2 minutes to execute and return a value)
SELECT C.dt
FROM Calendar C
WHERE C.IsWeekDay = 1
  AND C.IsHoliday = 0
  AND 9 = (SELECT COUNT(*) FROM Calendar C2
           WHERE C2.dt >= GETDATE()
             AND C2.dt <= C.dt
             AND C2.IsWeekDay = 1
             AND C2.IsHoliday = 0)
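If the goal of that last query is "the date 9 business days from now", a set-based form (sub-second, and SQL 2000 compatible - a sketch against the Calendar table above) is to take the first 9 business days on or after today and keep the last one:

SELECT MAX(dt) AS NinthBusinessDay
FROM (
    SELECT TOP 9 dt
    FROM dbo.Calendar
    WHERE dt >= GETDATE()
      AND IsWeekday = 1
      AND IsHoliday = 0
    ORDER BY dt
) AS d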
Poor Performance Temporarily Cured by Re-booting SQL Server
Please can you help?
SQL Server 2000, accessed by VB.NET 2003. The server was set up by a SQL consultant (who is no longer available).
We have a regular problem which is most noticeable when analysing data (rather than during data inserts/deletions) to produce XML or CSV files or reports, even though the amount of data is relatively small. However, the structure of the database means there are still quite a lot of records and a lot of links between tables (the database is currently being redesigned, but in the meantime the problem still exists).
The problem is that access slows down dramatically and in some cases all but stops (one example is a CSV export where only a few lines had been output after 30 minutes). In every case the symptoms can be cured by rebooting the SQL Server, after which the entire CSV file is output in 10 minutes. Obviously, though, we wish to solve the problem and not just the symptom.
Additional information: transaction logs are backed up at 13:00 and in the evening (both periods of very low usage). A full backup is run in the evening and at 13:00, with an integrity check performed prior to the full backup. The database is optimized after the full backup, including shrinking the database, leaving 15% of the data space free.
I'm using the code below to send files stored in a blob column in my database to the browser client. The code sends the file in chunks in order to increase performance. The file I'm testing with is 7 MB. It works great on Windows XP with any browser, taking virtually the same amount of time as downloading the file directly from the web server. However, Windows 2000 and Mac OS X both take about 4x as long as XP machines to download the file. Why the performance difference? Is there anything I can do to fix this? I tried downloading the file directly from the web server instead of getting it out of the database, and that takes the same amount of time on all 3 OSes. I had the same problem on Windows XP before I was sending the file in chunks, but after switching to the code below, it started working - for XP only.
Dim bufferSize As Integer = 24000
Dim outbyte(bufferSize - 1) As Byte
Dim retval As Long
Dim startIndex As Long = 0

Dim sql As String = "SELECT ..."
Dim cmd As New SqlCommand(sql, conn)
conn.Open()
Dim dr As SqlDataReader = cmd.ExecuteReader(CommandBehavior.SequentialAccess)
If dr.Read() Then
    ' Reset the starting byte for a new BLOB.
    startIndex = 0

    ' Read bytes into outbyte() and retain the number of bytes returned.
    retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Current.Response.Clear()
    Current.Response.Buffer = True
    Current.Response.ContentType = "application/octet-stream"
    Current.Response.AddHeader("Content-Disposition", "attachment; filename=" & myfile & "." & myextension)

    Do While retval = bufferSize
        Current.Response.BinaryWrite(outbyte)
        Current.Response.Flush()

        ' Reposition the start index to the end of the last buffer and refill the buffer.
        startIndex += bufferSize
        retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Loop

    ' Write the remainder of the last chunk (retval bytes, so indices 0 to retval - 1).
    Dim remaining(retval - 1) As Byte
    Array.Copy(outbyte, 0, remaining, 0, retval)
    Current.Response.BinaryWrite(remaining)
    Current.Response.Flush()
    Current.Response.Close()
End If
dr.Close()
conn.Close()
Hello, I have a test database with table A containing 10,000 rows and a table B containing 100,000 rows. Rows in B are "children" of rows in A - each row in A has 10 related rows in B (i.e. B has a foreign key to A). Using ODBC I am executing the following loop 10,000 times, expressed below in pseudo-code:

"select * from A order by a_pk option (fast 1)"
"fetch from A result set"
"select * from B where fk_to_a = 'xxx' order by b_pk option (fast 1)"
"fetch from B result set" repeated 10 times

In the above pseudo-code, 'xxx' is the primary key of the current A row. NOTE: it is not a mistake that we are repeatedly doing the A query and retrieving only the first row.

When the queries use fast-forward-only cursors this takes about 2.5 minutes. When the queries use dynamic cursors this takes about 1 hour. Does anyone know why the dynamic cursor is killing performance? Because of the SQL Server ODBC driver it is not possible to have nested/multiple fast-forward-only cursors, hence I need to explore other alternatives. I can only assume that a different query plan is getting constructed for the dynamic cursor case versus the fast-forward-only cursor, but I have no way of finding out what that query plan is.

All help appreciated. Kevin
I decided to change over from a Microsoft Access database file to the new SQL Server Compact Edition. Although reading data from the database is greatly improved, inserting new rows is extremely slow.
I was getting between 60 and 70 rows per second using OLEDB and an Access database, but now I'm only getting 14 to 27 rows per second using SQL Server CE.
I have tried the code changes below and nothing seems to increase the speed. Any help appreciated, as I would prefer to use SQL Server CE since the database is much smaller and I'm used to SQL commands.
Details: VB2008 Pro, .NET Framework 2.0, SQL Compact Edition V3.5, Encryption = Engine Default, Database Size = 128 MB (but needs to be changed to 999 MB)
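On the 128 MB point: if I understand the docs correctly, SQL Server CE caps the database at 128 MB unless the connection string raises the limit via the Max Database Size property (value in MB; the path here is hypothetical):

Data Source=|DataDirectory|\Backups.sdf;Max Database Size=999;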
Backup_On_Next_Run, OverWriteQuick and CompressAns are Booleans; all other columns are nvarchar, sized 10 to 30, except for Full_Folder_Address, which is size 260.
14 to 20 rows per second (was 60 to 70 when using OLEDB and Access)
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce( _
    ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
    ByVal File1 As String, ByVal File_Size_KB1 As String, _
    ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, _
    ByVal Last_Run1 As String, ByVal Result1 As String, _
    ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, _
    ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, _
    ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
    ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, _
    ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, _
    ByVal CompressAns As Boolean)
    Try
        cmd.CommandText = "SELECT * FROM BackupDatabase"
        ' ExecuteResultSet (not ExecuteNonQuery) is what materialises the updatable result set.
        Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)

        Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()

        rec.SetString(1, Group_Name1)
        rec.SetString(2, Full_Folder_Address1)
        rec.SetString(3, File1)
        rec.SetSqlString(4, File_Size_KB1)
        rec.SetSqlString(5, Schedule_To_Run1)
        rec.SetSqlString(6, Backup_Time1)
        rec.SetSqlString(7, Last_Run1)
        rec.SetSqlString(8, Result1)
        rec.SetSqlString(9, Last_Modfied1)
        rec.SetSqlString(10, Latest_Modfied1)
        rec.SetSqlBoolean(11, Backup_On_Next_Run1)
        rec.SetSqlString(12, Total_Backup_Times1)
        rec.SetSqlString(13, Server_File_Number1)
        rec.SetSqlString(14, Server_Number1)
        rec.SetSqlString(15, File_Break_Down1)
        rec.SetSqlString(16, No_Of_Servers1)
        rec.SetSqlString(17, Full_File_Address1)
        rec.SetSqlBoolean(18, OverWriteQuick)
        rec.SetSqlBoolean(19, CompressAns)
        rs.Insert(rec)
    Catch e As Exception
        MessageBox.Show(e.Message)
    Finally
        conn.Close()
    End Try
End Sub
20 to 24 rows per sec
TRY 3
Using SQL Commands Direct
Private Sub InsertRecordsIntoSQLServerce( _
    ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
    ByVal File1 As String, ByVal File_Size_KB1 As String, _
    ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, _
    ByVal Last_Run1 As String, ByVal Result1 As String, _
    ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, _
    ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, _
    ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
    ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, _
    ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, _
    ByVal CompressAns As Boolean)
I get a performance problem when I try to transfer records from SQL Server 7 to an Oracle 7 database using DTS. I am using an OLE DB connection on both sides with the drivers provided by Microsoft, and it's really slow.
Any tips or suggestions to increase the speed will be appreciated.
Does anyone have any suggestions on increasing the performance going over a linked server? Is this connection persistent, or does it establish a connection with each reference? Are there any providers for the connection other than Microsoft's OLE DB Provider for Oracle? I imagine that building indexes on the referenced Oracle tables would help?
We have an Oracle-based enterprise resource planning (ERP) application that facilitates order entry, invoicing, dispatch, stock management, and purchasing. However, queries from the remote offices, which are linked via Frame Relay, take as much as 30 seconds.
Any experiences or ideas shared will be greatly appreciated.
I have been reading some topics on "slow Oracle connections", but I have not found an answer to my question yet.
The problem is that I am extracting data from an Oracle database to raw files (to import into our DWH later on). For one table this takes more than 1 hour! I find this very slow, but maybe I am expecting too much speed?
Can anyone tell me how long it should take to download 10 million rows from Oracle to a raw file (no transformations)?
Some extra information:
- The table has 10 million rows and a little over 200 columns
- The data types in the Oracle database are varchar, number and date
- I tried to tune the buffer size in the data flow and found that I could fit about 1,900 rows in a DefaultBufferSize of 100 MB
- I use the "Native OLE DB\Microsoft OLE DB Provider for Oracle" (Provider=MSDAORA.1)
Can I do anything to get better performance, or is 1 hour the best performance we can get for a table like this?
(Changing the Oracle database is unfortunately not an option...)
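As a sanity check on whether 1 hour is unreasonable, the numbers given imply a very wide row; the arithmetic, written as runnable T-SQL for convenience (note the buffer sizing is based on maximum column widths, so the actual bytes moved are likely less):

-- ~1,900 rows per 100 MB buffer => roughly 55 KB allotted per row (200 wide columns)
SELECT (100.0 * 1024 * 1024) / 1900 AS approx_bytes_per_row

-- 10,000,000 rows in ~3,600 seconds => ~2,800 rows/sec
SELECT 10000000.0 / 3600 AS rows_per_second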
I was told in one of my systems classes that the real performance bottleneck in accessing information from the database is the opening of a connection from the application to the database.
To combat that problem I was advised to use a singleton factory pattern: have the factory instantiate a connection and open it, then pass references to that connection to all of the objects it creates; all of those objects pass the connection reference to the objects they create, and so on. Basically, that meant I only ever had one connection open at any one time for my entire application. I was able to implement this solution at my previous job, where I was developing against Oracle. I primarily used OracleCommands and OracleDataReaders to get information into and out of the database, and I thought it was a very nice solution. Having that many DataReaders accessing a single connection was not a problem, because OracleConnections don't get locked by having more than one DataReader open at once.
At my current job, however, I use SQL Server. I am concerned that the single connection will not work in my new environment, as SqlDataReaders lock up the connection while they are using it. If the information I received about opening connections being the real bottleneck is accurate, then I am hesitant to instantiate and open a connection for each method, but I am concerned that a whole lot of errors will be generated if I use the single-connection method. Also, how do DataAdapters affect my decision about which approach to use?
Any advice would be most helpful. If you have any questions that would help you answer, just ask. Thanks.
Setup: I am using a simple .aspx page to calculate a person's tax filing status and capture statistical information. Most of the items are drop-down boxes or radio buttons, but a few are checkboxes.
Question: My plan is to give each option in the checkboxes its own column in the database, which will lead to a table that is 70 columns wide. Is that too many? Most of the values will be very small, like "yes" or "no". I will be using a stored procedure to insert the info. This will be coming down a VPN.
Goal: I would like to give the admin staff here real-time access to the data via linked tables and such. I would like there to be only one table, so it will be easy for them to parse the info quickly without needing to join tables or write unions. Ideally this would get me out of the business of creating one-off reports.
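For what it's worth, SQL Server allows up to 1,024 columns per table, so 70 is well within limits, and bit columns keep the yes/no answers compact. A minimal sketch (all names hypothetical, trimmed to three of the ~70 columns) of the wide table and insert proc described above:

CREATE TABLE dbo.TaxFilingStats (
    StatID        int IDENTITY(1,1) PRIMARY KEY,
    FilingStatus  varchar(20) NOT NULL,
    HasDependents bit NOT NULL,    -- one bit column per checkbox option
    OwnsHome      bit NOT NULL,    -- ...and many more like these in the real table
    EnteredAt     datetime NOT NULL DEFAULT GETDATE()
)
GO
CREATE PROC dbo.InsertTaxFilingStat
    @FilingStatus varchar(20),
    @HasDependents bit,
    @OwnsHome bit
AS
INSERT INTO dbo.TaxFilingStats (FilingStatus, HasDependents, OwnsHome)
VALUES (@FilingStatus, @HasDependents, @OwnsHome)
GO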