I have a table which has around 132,000 rows with 24 columns. I use rda.Pull to download the data to a PDA. To query these data, I must create an index on 5 character columns.
The data download time is good enough, around 4 mins, but it takes 12 mins to create the index.
Please give me any ideas on how to improve the whole synchronization speed. Thanks!
I have found that the speed of the sort is very slow in my SQL (the SQL is very complicated, so I can't paste it on this page). How can I improve the speed of the sort?
I have a table (F_POLICY_TRANSACTION). This table has a couple of million rows in it. I am using a column named POLICY_TRANSACTION_BKEY to select records to delete (approximately 750k using the code below). This column has a non-clustered index applied. This is the code I have used:
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM F_POLICY_TRANSACTION WITH (TABLOCKX)
[code]....
The problem is, it takes around 10 minutes to run. Is there any way it can be made more efficient? I have tried varying the rowcount with no success.
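A batching pattern that often helps here, offered as an untested sketch (DELETE TOP needs SQL Server 2005 or later; the @bkey variable and its value are assumptions standing in for the post's predicate). Dropping the TABLOCKX hint also lets other sessions get work done between batches, and each batch commits as its own small transaction:

DECLARE @bkey varchar(50)
SET @bkey = 'example-key'        -- hypothetical value; the real predicate comes from the post

WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM F_POLICY_TRANSACTION
    WHERE POLICY_TRANSACTION_BKEY = @bkey

    IF @@ROWCOUNT < 50000 BREAK  -- last batch done

    CHECKPOINT                   -- under SIMPLE recovery, lets the log space from the batch be reused
END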
I have a pretty large DB and a fairly complex query. If I drop buffers and clear cache, the query runs in 20 seconds, returning 25K rows. Subsequent runs are 2 seconds. Is this the result of the results being cached, the execution plan being cached, or something else? Are there good ways to close the gap between the initial and later runs? Does the cache stay present until the service restarts, or does SQL recycle the memory, and if so, based on what criteria?
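For reference, a minimal sketch of the commands usually meant by "drop buffers and clear cache", which reproduce the cold 20-second run:

CHECKPOINT                -- flush dirty pages so the next command can drop them
DBCC DROPCLEANBUFFERS     -- empty the buffer pool, forcing physical reads again
DBCC FREEPROCCACHE        -- empty the plan cache, forcing recompilation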
Is there any way to improve this statement and make it shorter? Any input will be appreciated.
update TempAddressParsingTable
set ad_unit = case
    -- parentheses group the keyword alternatives so the separator test applies to all of them
    when (right(rtrim(ad_str1),3) like 'apt' or right(rtrim(ad_str1),3) like 'lot') and substring(reverse(ad_str1), 4,1) in ('', ',', '.') then right(ad_str1,3)
    when (right(rtrim(ad_str1),4) like 'unit' or right(rtrim(ad_str1),4) like 'apt%' or right(rtrim(ad_str1),4) like 'lot%') and substring(reverse(ad_str1), 5,1) in ('', ',', '.') then right(ad_str1,4)
    when (right(rtrim(ad_str1),5) like 'unit%' or right(rtrim(ad_str1),5) like 'apt%%' or right(rtrim(ad_str1),5) like 'lot%%') and substring(reverse(ad_str1), 6,1) in ('', ',', '.') then right(ad_str1,5)
    when (right(rtrim(ad_str1),6) like 'unit%%' or right(rtrim(ad_str1),6) like 'apt%%%' or right(rtrim(ad_str1),6) like 'lot%%%') and substring(reverse(ad_str1), 7,1) in ('', ',', '.') then right(ad_str1,6)
    when (right(rtrim(ad_str1),7) like 'unit%%%' or right(rtrim(ad_str1),7) like 'apt%%%%' or right(rtrim(ad_str1),7) like 'lot%%%%') and substring(reverse(ad_str1), 8,1) in ('', ',', '.') then right(ad_str1,7)
    when (right(rtrim(ad_str1),8) like 'unit%%%%' or right(rtrim(ad_str1),8) like 'apt%%%%%') and substring(reverse(ad_str1), 9,1) in ('', ',', '.') then right(ad_str1,8)
    when (right(rtrim(ad_str1),9) like 'unit%%%%%' or right(rtrim(ad_str1),9) like 'apt%%%%%%') and substring(reverse(ad_str1), 10,1) in ('', ',', '.') then right(ad_str1,9)
    when right(rtrim(ad_str1), 2) like '#%' and substring(reverse(ad_str1), 3, 1) in ('', ',', '.') then right(ad_str1, 2)
    when right(rtrim(ad_str1), 3) like '#%%' and substring(reverse(ad_str1), 4, 1) in ('', ',', '.') then right(ad_str1, 3)
    when right(rtrim(ad_str1), 4) like '#%%%' and substring(reverse(ad_str1), 5, 1) in ('', ',', '.') then right(ad_str1, 4)
    when right(rtrim(ad_str1), 5) like '#%%%%' and substring(reverse(ad_str1), 6, 1) in ('', ',', '.') then right(ad_str1, 5)
    else null
end
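A shorter rewrite is possible by reversing the string once and searching for each keyword with PATINDEX, instead of enumerating every suffix length by hand. This is an untested sketch under assumptions: it keeps the post's table and column names, needs SQL Server 2008 or later for the VALUES constructor, and does not cap the suffix length at 9 characters the way the original does:

update t
set ad_unit = u.unit          -- u.unit is NULL when nothing matches, like the original ELSE NULL
from TempAddressParsingTable as t
outer apply (
    select top (1)
           unit = right(' ' + rtrim(t.ad_str1), m.p + len(kw.kw) - 1)
    from (values ('apt'), ('lot'), ('unit'), ('#')) as kw(kw)
    -- the leading space makes a keyword at the very start still see a "separator"
    cross apply (select p = patindex('%' + reverse(kw.kw) + '[ ,.]%',
                                     reverse(' ' + rtrim(t.ad_str1)))) as m
    where m.p > 0
    order by m.p              -- smallest p = keyword closest to the end of the string
) as u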
Should I add an identity field (Identity=True) and a row version field (timestamp) to my table, and avoid spreading tables across different databases? Is this true in general?
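For reference, a minimal sketch of what that would look like (the table name is hypothetical; on newer versions the timestamp type is spelled rowversion):

ALTER TABLE dbo.MyTable ADD               -- hypothetical table name
    row_id  int IDENTITY(1, 1) NOT NULL,  -- surrogate identity
    row_ver rowversion NOT NULL           -- rowversion / timestamp for change detection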
Dear SQL Server experts: First off, I am no SQL Server expert :)

A few months ago I put a database into a production environment. Recently, it was brought to my attention that a particular query that executed quite quickly in our dev environment was painfully slow in production. I analyzed the plan on the production server (it looked good), and then tried quite a few tips that I'd gleaned from reading newsgroups. Nothing worked. Then on a whim I performed an UPDATE STATISTICS on a few of the tables that were being queried. The query immediately went from executing in 61 seconds to under 1 second. I checked to make sure that statistics were being "auto updated" and they were.

Why did I need to run UPDATE STATISTICS? Will I need to again?

A little more background info: The database started empty, and has grown quite rapidly in the last few months. One particular table grows at a rate of about 300,000 records per month. I get fast query times due to a few well placed indexes.

A quick question: If I add an index, do statistics get automatically updated for this new index immediately?

Thanks in advance for any help,
Felix
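For reference, a minimal sketch of the command in question (the table name is hypothetical); FULLSCAN trades time for exact statistics:

UPDATE STATISTICS dbo.FastGrowingTable                  -- default sampled refresh
UPDATE STATISTICS dbo.FastGrowingTable WITH FULLSCAN    -- scans every row; slower but exact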
I need help improving my update PROCEDURE. I have this update PROCEDURE and it works OK, but it is problematic because I have to add a new category all the time (101, 102, 103, 104, 105, 106, ...), and every time I have to rewrite the stored procedure to add the new category.
So my question is how to improve my update PROCEDURE, for example by creating an index table like this:
row_fld   fld_index   fld_text
---------------------------------------------------
1         101         a
2         101         b
3         101         c
1         102         e
2         102         f
3         102         g
1         103         h
2         103         i
3         103         j
---------------------------------------------------
so that at the end I can add new categories to the new table_index all the time, or update existing categories.
Code Snippet
ALTER PROCEDURE [dbo].[hofes_pro_stp2]
AS
BEGIN
    update [dbo].[tb_pivot_big]
    set fld1 = case v.fld1 when '101' then 'a' when '102' then 'e' when '103' then 'h' when '104' then 'k' end,
        f1   = case v.fld1 when '101' then 'b' when '102' then 'f' when '103' then 'i' when '104' then 'l' end,
        f11  = case v.fld1 when '101' then 'c' when '102' then 'g' when '103' then 'j' when '104' then 'm' end,
        fld2 = case v.fld2 when '101' then 'a' when '102' then 'e' when '103' then 'h' when '104' then 'k' end,
        f2   = case v.fld2 when '101' then 'b' when '102' then 'f' when '103' then 'i' when '104' then 'l' end,
        f22  = case v.fld2 when '101' then 'c' when '102' then 'g' when '103' then 'j' when '104' then 'm' end,
        fld3 = case v.fld3 when '101' then 'a' when '102' then 'e' when '103' then 'h' when '104' then 'k' end,
        f3   = case v.fld3 when '101' then 'b' when '102' then 'f' when '103' then 'i' when '104' then 'l' end,
        f33  = case v.fld3 when '101' then 'c' when '102' then 'g' when '103' then 'j' when '104' then 'm' end,
        fld4 = case v.fld4 when '101' then 'a' when '102' then 'e' when '103' then 'h' when '104' then 'k' end,
        f4   = case v.fld4 when '101' then 'b' when '102' then 'f' when '103' then 'i' when '104' then 'l' end,
        f44  = case v.fld4 when '101' then 'c' when '102' then 'g' when '103' then 'j' when '104' then 'm' end
    from [dbo].[tb_pivot_big] t
    inner join (select * from [dbo].[tb_pivot_big] where val_orginal = 1) v on t.empID = v.empID
    where t.val_orginal = 2
END
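One way to avoid editing the procedure for every new category, sketched under assumptions (tb_code_map and its columns are hypothetical; the rest of the names come from the post): keep the code-to-text mapping in a table and join to it, so a new category is just a new row.

CREATE TABLE dbo.tb_code_map (
    fld_index varchar(10) NOT NULL PRIMARY KEY,  -- '101', '102', ...
    txt1      varchar(10) NOT NULL,              -- value written to fld1..fld4 ('a','e','h','k')
    txt2      varchar(10) NOT NULL,              -- value written to f1..f4     ('b','f','i','l')
    txt3      varchar(10) NOT NULL               -- value written to f11..f44   ('c','g','j','m')
)

UPDATE t
SET fld1 = m1.txt1, f1 = m1.txt2, f11 = m1.txt3,
    fld2 = m2.txt1, f2 = m2.txt2, f22 = m2.txt3,
    fld3 = m3.txt1, f3 = m3.txt2, f33 = m3.txt3,
    fld4 = m4.txt1, f4 = m4.txt2, f44 = m4.txt3
FROM dbo.tb_pivot_big AS t
INNER JOIN (SELECT * FROM dbo.tb_pivot_big WHERE val_orginal = 1) AS v
        ON t.empID = v.empID
LEFT JOIN dbo.tb_code_map AS m1 ON m1.fld_index = v.fld1
LEFT JOIN dbo.tb_code_map AS m2 ON m2.fld_index = v.fld2
LEFT JOIN dbo.tb_code_map AS m3 ON m3.fld_index = v.fld3
LEFT JOIN dbo.tb_code_map AS m4 ON m4.fld_index = v.fld4
WHERE t.val_orginal = 2

Adding category 105 then becomes a single INSERT into tb_code_map instead of an ALTER PROCEDURE.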
We are planning to add a new attribute to one of our tables to speed up data access. Once the attribute is added, we will need to populate that attribute for each of the records in the table.

Since the table in question is very large, the update statement is taking a considerable amount of time. From reading through old posts and Books Online, it looks like one of the big things slowing down the update is writing to the transaction log.

I have found mention of "truncate log on checkpoint" and using "SET ROWCOUNT" to limit the number of rows updated at once. Or "dump transaction databaseName with NO_LOG".

Does anyone have any opinions on these tactics? Please let me know if you want more information about the situation in order to provide an answer!
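For what it's worth, a minimal sketch of the SET ROWCOUNT batching idea mentioned above (the table, column, and value are hypothetical); each iteration commits as its own small transaction, so the log can be reused between batches:

SET ROWCOUNT 10000              -- limit each UPDATE to 10,000 rows (SQL 2000 style)
WHILE 1 = 1
BEGIN
    UPDATE dbo.BigTable         -- hypothetical table
    SET new_attr = 0            -- hypothetical new attribute and default value
    WHERE new_attr IS NULL      -- only rows not yet populated
    IF @@ROWCOUNT = 0 BREAK     -- done
END
SET ROWCOUNT 0                  -- always reset the session limit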
Background: I have a table (named achtransactions) that has 618,423 rows. I have the ID field set as the Primary Key.
Problem: Usually when doing a single, simple update (ie: UPDATE achtransactions SET transstatus = 'APPROVED' where id = 123456) it would take less than a second.
All of a sudden, beginning today, the same UPDATE statement is taking about 20 seconds on average. Other smaller tables (~100 rows) update instantly, just appears to be this one table.
Any ideas on what I should look at to find the problem?
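A hedged first diagnostic pass for this kind of sudden slowdown (the commands are standard; only the table name comes from the post): check for blocking first, then statistics and fragmentation.

EXEC sp_who2                           -- look at the BlkBy column for blocking
DBCC SHOWCONTIG ('achtransactions')    -- scan density / fragmentation on the table
UPDATE STATISTICS achtransactions      -- refresh statistics on the table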
I'm having a problem with an update operation in a stored procedure. It runs so slowly that it is unusable, unless I comment a part out, in which case it is very fast. However, I need the whole thing :). I have a table of email addresses of people who want to get invited to parties. Each row contains information like email address, city, state, country, and preferences for what types of events are of interest. The primary key is an EMAILID, and there is a unique constraint on the email field. The stored procedure receives the field data as arguments, and inserts the record if the email address passed is not in the database. This works perfectly. However, if the stored procedure is called for an email address that already exists, it updates the existing row instead of doing an insert. This way I can build a web page that lets people modify their preferences, opt in and out of the list and so on. If I am doing an update, the stored procedure runs SUPER SLOW (and the page times out) unless I comment out the part of the update statement for city, state, country and zipcode. However, I really need to be able to update this! My database has 29 million rows. Thank you for telling me anything about how I can speed up this update!

Here is the SQL statement to run the stored procedure:

declare @now datetime;
set @now = GetUTCDate();
EXEC usp_EMAIL_Subscribe @Email='dberman@sen.us', @OptOutDate=@now,
    @Opt_GenInterest=1, @Opt_DatePeople=0, @Opt_NewFriends=1,
    @Opt_OldFriends=0, @Opt_Business=1, @Opt_Couples=0, @OptOut=0,
    @Opt_Events=0, @City='Boston', @State='MA', @ZCode='02215',
    @Country='United States'

Here is the stored procedure:

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
ALTER PROCEDURE [usp_EMAIL_Subscribe]
    (@Email [varchar](50),
     @Opt_GenInterest [tinyint],
     @Opt_DatePeople [tinyint],
     @Opt_NewFriends [tinyint],
     @Opt_OldFriends [tinyint],
     @Opt_Business [tinyint],
     @Opt_Couples [tinyint],
     @OptOut [tinyint],
     @OptOutDate datetime,
     @Opt_Events [tinyint],
     @City [varchar](30),
     @State [varchar](20),
     @ZCode [varchar](10),
     @Country [varchar](20))
AS
BEGIN
    declare @EmailID int
    set @EmailID = NULL

    -- Get the EmailID matching the provided email address
    set @EmailID = (select EmailID from v_SENWEB_EMAIL_SUBSCRIBERS where EmailAddress = @Email)

    -- If the address is new, insert the address and settings. Otherwise, UPDATE existing email profile
    if @EmailID is null or @EmailID = -1
    Begin
        INSERT INTO v_SENWEB_Email_Subscribers
            (EmailAddress, OptInDate, OptedInBy, City, StateProvinceUS, Country, ZipCode,
             GeneralInterest, MeetDate, MeetFriends, KeepInTouch, MeetContacts, MeetOtherCouples, MeetAtEvents)
        VALUES
            (@Email, GetUTCDate(), 'Subscriber', @City, @State, @Country, @ZCode,
             @Opt_GenInterest, @Opt_DatePeople, @Opt_NewFriends, @Opt_OldFriends,
             @Opt_Business, @Opt_Couples, @Opt_Events)
    End
    Else
    BEGIN
        UPDATE v_SENWEB_EMAIL_SUBSCRIBERS
        SET
            --City = @City,
            --StateProvinceUS = @State,
            --Country = @Country,
            --ZipCode = @ZCode,
            GeneralInterest = @Opt_GenInterest,
            MeetDate = @Opt_DatePeople,
            MeetFriends = @Opt_NewFriends,
            KeepInTouch = @Opt_OldFriends,
            MeetContacts = @Opt_Business,
            MeetOtherCouples = @Opt_Couples,
            MeetAtEvents = @Opt_Events,
            OptedOut = @OptOut,
            OptOutDate = CASE
                WHEN (@OptOut = 1) THEN @OptOutDate
                WHEN (@OptOut = 0) THEN 0
            END
        WHERE EmailID = @EmailID
    END
    return @@Error
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Finally, here is the database schema for the table, courtesy of Enterprise Manager:

CREATE TABLE [dbo].[EMAIL_SUBSCRIBERS] (
    [EmailID] [int] IDENTITY (1, 1) NOT NULL ,
    [EmailAddress] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [OptinDate] [smalldatetime] NULL ,
    [OptedinBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [FirstName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [MiddleName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [LastName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [JobTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CompanyName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [WorkPhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [HomePhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine1] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine2] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine3] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [City] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceUS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceOther] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Country] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [ZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SubZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [GeneralInterest] [tinyint] NULL ,
    [MeetDate] [tinyint] NULL ,
    [MeetFriends] [tinyint] NULL ,
    [KeepInTouch] [tinyint] NULL ,
    [MeetContacts] [tinyint] NULL ,
    [MeetOtherCouples] [tinyint] NULL ,
    [MeetAtEvents] [tinyint] NULL ,
    [OptOutDate] [datetime] NULL ,
    [OptedOut] [tinyint] NOT NULL ,
    [WhenLastMailed] [datetime] NULL
) ON [PRIMARY]
GO
CREATE UNIQUE CLUSTERED INDEX [IX_EMAIL_SUBSCRIBERS_ADDR] ON [dbo].[EMAIL_SUBSCRIBERS]([EmailAddress]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
ALTER TABLE [dbo].[EMAIL_SUBSCRIBERS] WITH NOCHECK ADD
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_OptedOut] DEFAULT (0) FOR [OptedOut],
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_WhenLastMailed] DEFAULT (null) FOR [WhenLastMailed],
    CONSTRAINT [PK_EMAIL_SUBSCRIBERS] PRIMARY KEY NONCLUSTERED ([EmailID]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_WhenLastMailed] ON [dbo].[EMAIL_SUBSCRIBERS]([WhenLastMailed] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptOutDate] ON [dbo].[EMAIL_SUBSCRIBERS]([OptOutDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptInDate] ON [dbo].[EMAIL_SUBSCRIBERS]([OptinDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_ZipCode] ON [dbo].[EMAIL_SUBSCRIBERS]([ZipCode]) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_STATEPROVINCEUS] ON [dbo].[EMAIL_SUBSCRIBERS]([StateProvinceUS]) ON [PRIMARY]
GO
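A hedged observation worth testing: two of the commented-out columns (StateProvinceUS and ZipCode) carry nonclustered indexes in the schema above, so the slow variant has extra index maintenance to do. One way to measure the difference:

SET STATISTICS IO ON
SET STATISTICS TIME ON
-- run the EXEC usp_EMAIL_Subscribe example from above twice: once as-is and once
-- with the City/StateProvinceUS/Country/ZipCode assignments uncommented, then
-- compare the logical reads and elapsed times between the two runs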
I have several databases on a server (SQL Server 2000 only, no web server installed) and lately, as the company keeps growing, my users complain that the server gets slow (these DBs are well designed and receive optimizations and integrity checks, etc.). Because of this, I'm thinking about getting a new server to replace my old ProLiant ML 330, which was bought 4 years ago, but I'm concerned about which server architecture or characteristic can best help me improve response performance: is it HD speed? Processor speed? Or more RAM? I want to make a good decision, so I'd really appreciate your help...
I have the following SQL, which works, but I think it can be done more simply. I seem to have to group it by multiple columns, but I am sure there must be a way of grouping the results by a single column. Any ideas?
Code:
SELECT count(order_items.order_id) as treenum,
       orders.order_id, orders.order_date, orders.cust_order,
       orders.del_date, orders.confirmed, orders.del_addr
FROM orders, order_items
WHERE orders.order_id = order_items.order_id
GROUP BY orders.order_id, orders.order_date, orders.cust_order,
         orders.del_date, orders.confirmed, orders.del_addr
ORDER BY orders.order_id DESC
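One possible rewrite, offered as an untested sketch with the same table and column names: aggregate order_items in a derived table first, so the GROUP BY touches only order_id and the other columns come along for free.

SELECT oi.treenum,
       o.order_id, o.order_date, o.cust_order,
       o.del_date, o.confirmed, o.del_addr
FROM orders AS o
INNER JOIN (SELECT order_id, count(*) AS treenum
            FROM order_items
            GROUP BY order_id) AS oi
        ON oi.order_id = o.order_id
ORDER BY o.order_id DESC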
I have four different transactions such as the one below, and I do one insert and one update in each transaction. It seems slow and creates deadlocks with the user interface.
These transactions are performed against tables that users are accessing with another user interface. I have the following two questions:
1. T2.TextField1 and TextField2 = @TextField2 are Ok/Nok flag fields, so I did not put an index on them since they have only two distinct values. Should I put indexes on these fields?
2. Can I make this transaction let the user interface do its task when both are accessing the same rows? I can start the transaction again, but I do not want the users to get disturbed.
BEGIN TRANSACTION pTrans
BEGIN
    INSERT INTO T1 (fields)
    SELECT (fields)
    FROM T2
    INNER JOIN View1 ON T2.TrID = View1.MyTableID
    WHERE (T2.TextField1 = @TrType AND T2.TextField2 = @TextField2)

    UPDATE T2
    SET TextField2 = 'Ok', TextField2Date = @MyRunDateTime
    FROM T2
    WHERE (TextField1 = @TrType AND TextField2 = @TextField2)

    IF @@ERROR <> 0
    BEGIN
        rollback transaction pTrans
        return(-1)
    END
    ELSE
    BEGIN
        commit transaction pTrans
    END
END
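On question 2, one standard lever, offered as a suggestion rather than a fix: SET DEADLOCK_PRIORITY LOW makes this batch the preferred deadlock victim, so the user interface wins the deadlock and the batch simply retries.

SET DEADLOCK_PRIORITY LOW   -- this session volunteers to be the deadlock victim

BEGIN TRANSACTION pTrans
    -- ... the INSERT and UPDATE from the post go here unchanged ...
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION pTrans
    -- error 1205 (deadlock) means this batch was chosen as the victim: safe to retry from the caller
END
ELSE
    COMMIT TRANSACTION pTrans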
I have a pretty good DB server with four CPUs and no other load on it, but the following query takes 4 ms to return. I use syscolumns this way quite often, and I am not sure why it takes that long to return. Any idea?

select 'master', id, colid, name, xtype, length, xprec, xscale, status
from [ablestatic].[dbo].syscolumns
where id = (select id from [ablestatic].[dbo].sysobjects where name = 'link_data_ezregs')
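A hedged variant to try (untested; OBJECT_ID resolves cross-database names on SQL Server 2005 and later), which avoids the subquery against sysobjects:

select 'master', id, colid, name, xtype, length, xprec, xscale, status
from [ablestatic].[dbo].syscolumns
where id = object_id('[ablestatic].[dbo].[link_data_ezregs]')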
Hi, have a nice day to all. Can some expert here point out the dos and don'ts when writing queries? For example: should we create many views as handlers to get data? When should we use stored procedures and triggers? And so on.
I am seeking ways to improve my SQL skills. Can someone suggest reference sites, material, or books as well?
I wrote the following function a few years ago, before I learned about SQL's PATINDEX function. It might be possible to check for a valid email address syntax with a single PATINDEX pattern which could replace the entire body of the function below.
Is anyone is interested in taking a crack at it?
Signed... lazy Sam
CREATE FUNCTION dbo.EmailIsValid (@Email varchar(100))
/* RETURN 1 if @Email contains a valid email address syntax, ELSE RETURN 0 */
RETURNS BIT
AS
BEGIN
    DECLARE @atpos int, @dotpos int

    SET @Email = LTRIM(RTRIM(IsNull(@Email, '')))        -- remove leading and trailing blanks

    IF LEN(@Email) = 0 RETURN(0)                         -- nothing to validate

    SET @atpos = CHARINDEX('@', @Email)                  -- position of first (hopefully only) @

    IF @atpos <= 1 OR @atpos = LEN(@Email) RETURN(0)     -- @ is first, last, or missing

    IF CHARINDEX('@', @Email, @atpos + 1) > 0 RETURN(0)  -- two @s are illegal

    IF CHARINDEX(' ', @Email) > 0 RETURN(0)              -- embedded blanks are illegal

    SET @dotpos = CHARINDEX('.', REVERSE(@Email))        -- location (from rear) of last dot

    IF (@dotpos < 3) OR (@dotpos > 4) OR (LEN(@Email) - @dotpos) < @atpos RETURN(0)  -- last dot must leave a 2- or 3-char TLD and sit after the @

    RETURN(1)   -- all checks passed
END
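Taking a lazy crack at it: a single pattern cannot express every rule above (the 2- or 3-character TLD test in particular), but a looser one-expression version might look like this untested sketch, using LIKE patterns in the same spirit as PATINDEX:

CREATE FUNCTION dbo.EmailIsValidShort (@Email varchar(100))
RETURNS BIT
AS
BEGIN
    SET @Email = LTRIM(RTRIM(ISNULL(@Email, '')))
    RETURN CASE
        WHEN @Email LIKE '_%@_%._%'        -- something @ something . something
         AND @Email NOT LIKE '%@%@%'       -- no second @
         AND CHARINDEX(' ', @Email) = 0    -- no embedded blanks
        THEN 1 ELSE 0
    END
END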
I have this SQL query that can take a long time, up to 1 minute if the table contains over 1 million rows. And if the system is very active while executing this query, it can cause further delays, I guess.
select distinct
    'CONV 1' as Conveyour,
    info as Error,
    (select top 1 substring(timecreated, 0, 7)
     from log b
     where a.info = b.info
     order by timecreated asc) as Date,
    (select count(*) from log b where b.info = a.info) as 'Times occured'
from log a
where loggroup = 'CSCNV' and logtype = 4
The table name is LOG, and I retrieve 4 columns: Conveyour, Error, Date and Times occured. The point of the subqueries is to count each distinct post and to retrieve the date of the first time the post was logged. Also, a first and last date could be specified, but that is left out here.
Does anyone know how I can improve this SQL query?
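One candidate rewrite, untested and assuming timecreated is a string whose first six characters sort the same way as the full value: a single GROUP BY pass replaces both correlated subqueries and the DISTINCT.

select
    'CONV 1' as Conveyour,
    info as Error,
    min(substring(timecreated, 0, 7)) as Date,   -- prefix of the earliest entry per info value
    count(*) as 'Times occured'
from log
where loggroup = 'CSCNV' and logtype = 4
group by info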
Hi, we have a poorly performing SQL 2000 DB. I have just defragged the data files of our server (the HD, not the indexes; those are done daily via SQL Agent) and have not found any improvement in response. I have now got into using SQL Profiler to analyse the server performance. In the results that the trace is returning there are some huge (REALLY BIG) values in the duration and CPU columns, but these rows have no TextData value returned (i.e. it is null).
Why is this? For these rows, the reads and writes columns are also high.
If these rows are what is taking the CPUs' time, then how can I identify what the server is doing, so that I can make changes?
Any thoughts on what other values I might trace, or what action I can take to find the cause of the slowdown?
In Performance Monitor, the processors (dual Xeons) are rarely dropping below 60%.
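One hedged suggestion for digging further: save the trace to a file and load it with fn_trace_gettable, so the expensive rows can be sorted and grouped in SQL rather than eyeballed in the Profiler GUI (the file path is hypothetical):

SELECT TOP 50 TextData, Duration, CPU, Reads, Writes, SPID, StartTime
FROM ::fn_trace_gettable('C:\traces\slow.trc', DEFAULT)
ORDER BY Duration DESC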
Dear experts, I'm a DBA working for a product-based company. We are implementing our product for a certain client with a huge OLTP workload. Our reports team is facing a problem (error: all the reports are timed out). Though the queries are written properly, each query is taking some minutes of time. I've given the command DBCC DROPCLEANBUFFERS, and the time immediately dropped to 10 sec.
Now my question is: please suggest the DBCC commands, or any DBA-related commands, to improve the performance of the application for my reports team.
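A hedged starting list (the table name is hypothetical, and whether each command is appropriate depends on the system; note that DBCC DROPCLEANBUFFERS only simulates a cold cache for testing, it does not itself speed anything up):

EXEC sp_updatestats                         -- refresh statistics database-wide
UPDATE STATISTICS dbo.SomeReportTable       -- or target the hot report tables directly
DBCC DBREINDEX ('dbo.SomeReportTable')      -- rebuild fragmented indexes
-- DBCC DROPCLEANBUFFERS / DBCC FREEPROCCACHE: for cold-cache testing only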
From your experience with SQL 2005, is there any free software that can help improve performance or help identify performance bottlenecks? Two examples of performance helpers that I usually use are the maintenance plan that does (check DB > reorganize index > rebuild index > update statistics) and, second, the SQL 2005 DASHBOARD for the reporting help. Do you have any other free tools and help that you can give me for performance, or anything that I must have on my SQL 2005 servers?
I have a table called work_order which has over 1 million records, and a contractor table which has over 3000 records. When I run this query, it takes a long time, since it groups by contractor and does multiple sub-SELECTs. Is there any way to improve the performance of this query?

SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id = t2.contractor_id and rrstm = 1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id = t3.contractor_id and rrstm = 2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id = t4.contractor_id and rrstm = 3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id = t5.contractor_id and rrstm = 4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id = t6.contractor_id and rrstm = 5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id = t7.contractor_id and rrstm = 6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id = t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id = t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id = t10.contractor_id and (rtyp is NULL or rtyp <> 'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id = contractor.contractor_id and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
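A rewrite worth trying, offered as an untested sketch with the same names: conditional aggregation computes all nine counts in the one pass the GROUP BY already makes, instead of one correlated scan per column. It assumes, as the original join implies, that each work_order row matches exactly one contractor row.

SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
       sum(case when rrstm = 1 and rcdt is null then 1 else 0 end) as r1,
       sum(case when rrstm = 2 and rcdt is null then 1 else 0 end) as r2,
       sum(case when rrstm = 3 and rcdt is null then 1 else 0 end) as r3,
       sum(case when rrstm = 4 and rcdt is null then 1 else 0 end) as r4,
       sum(case when rrstm = 5 and rcdt is null then 1 else 0 end) as r5,
       sum(case when rrstm = 6 and rcdt is null then 1 else 0 end) as r6,
       sum(case when rcdt is null then 1 else 0 end) as open_count,
       sum(case when vendor_rec is not null then 1 else 0 end) as Ack_count,
       sum(case when (rtyp is null or rtyp <> 'R') and rcdt is null then 1 else 0 end) as open_norwo
FROM work_order t1
INNER JOIN contractor ON t1.contractor_id = contractor.contractor_id
WHERE contractor.tms_user_id is not null
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam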
Hey guys, here's my situation: I have a table, let's say called 'Tree', as illustrated below:

Tree
====
TreeId (integer) (identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)

The combination of the values of L1 thru L10 is called a "Path", and the L1 thru L10 values are stored in a second table, let's say called 'Leaf':

Leaf
====
LeafId (integer) (identity) not null
LeafText varchar(2000)

Here's my problem: I need to look up a given keyword in each path of the Tree table, and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:

SELECT TreeId, L1, L2, ..., L10,
       GetText(L1) + GetText(L2) + ... + GetText(L10) AS PathText
INTO #tmp FROM Tree    -- GetText is a lookup function for the Leaf table

SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10)
FROM #tmp
WHERE CharIndex(@keyword, PathText) > 0

Does anyone know a better, smarter, more efficient way to accomplish this task? :)
Thks,
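One set-based alternative, as an untested sketch against the schema above (note a behavioral difference: it matches the keyword inside a single leaf's text, whereas the concatenated PathText could also match across leaf boundaries):

SELECT t.*
FROM Tree AS t
WHERE EXISTS (SELECT 1
              FROM Leaf AS lf
              WHERE lf.LeafText LIKE '%' + @keyword + '%'
                AND lf.LeafId IN (t.L1, t.L2, t.L3, t.L4, t.L5,
                                  t.L6, t.L7, t.L8, t.L9, t.L10))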
I would like to know how I can replace a "not in" clause, keeping the same results, in a SQL statement in order to improve that statement's performance.
I'm facing a performance issue with the following query. The output of the query is 184 records, and it takes 2 to 3 secs to execute.
SELECT DISTINCT Column1
FROM Table1 (NOLOCK)
WHERE Column1 NOT IN
      (SELECT T1.Column1
       FROM Table1 T1 (NOLOCK)
       JOIN Table2 T2 (NOLOCK) ON T2.Column2 = T1.Column2
       WHERE T2.Column3 = <Value>)
Data Info.
No of records in Table1 --> 1377366
No. of distinct records of Column1 in Table1 --> 33240
Is there any way the above query can be rewritten to improve the performance, so that it takes less than 1 sec? (I'm using DISTINCT because there are duplicate records of Column1 in Table1.)
Any help in this regard will be greatly appreciated.
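One rewrite to try, as an untested sketch that keeps the post's placeholder names (<Value> included): NOT EXISTS with a correlated predicate often produces a better plan than NOT IN, especially when the subquery column is nullable.

SELECT DISTINCT t.Column1
FROM Table1 t (NOLOCK)
WHERE NOT EXISTS
      (SELECT 1
       FROM Table1 T1 (NOLOCK)
       JOIN Table2 T2 (NOLOCK) ON T2.Column2 = T1.Column2
       WHERE T2.Column3 = <Value>      -- placeholder kept from the post
         AND T1.Column1 = t.Column1)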
I am not an expert in either SSIS or VFP technology, but I know enough to get my way round. One anomaly I did discover I thought was worth sharing, for all those concerned with getting large amounts of data out of VFP in as short a time as possible. When you search for performance tips in relation to SSIS, the advice is to never use "Table or view" from the data access mode list in the OLE DB source, as this effectively translates to SELECT * FROM table, and I've never come across anything to contradict this. Well, I am contradicting it, and let me explain why:
When you use "SQL command" as the data access mode in the OLE DB source (where the OLE DB source is a FoxPro DBC) and you write out "select column1, column2, etc. from table", and then connect that to a destination (in my case an OLE DB destination), the SSIS task spends ages stuck on Pre-execute before anything happens (the bigger the FoxPro table, the longer the wait). What is happening behind the scenes is that the FoxPro engine (assuming it's the FoxPro engine and not the SQL engine; either way I don't think it matters too much) is executing the SQL command and then writing the results to a tmp (temp) file in your local temp folder (in my case: C:\Documents and Settings\autologin\Local Settings\Temp\1). These files take up gigs of space, and it is only when this process is complete that the SSIS task actually finishes the Pre-execute and starts the data transfer process. I couldn't understand a) why my packages were stuck on Pre-execute for such long times, and b) why the tmp files were being created and why they were so big.
If you change from "SQL command" in the source to "Table or view" and then select your table from the list, the SSIS task, when executed, kicks off immediately and doesn't get stuck on Pre-execute nor create any tmp files, so you save both time and disk space. The difference in time is immense, and if, like me, you were really frustrated with poor performance when extracting from VFP, now you know why.
Btw, maybe this does not apply to all versions of VFP, but it certainly does to v7.
In the tables I have built an index on name, but when every table has 20,000 records, the SQL above is very slow. Is there another method to improve the query speed?
Hi, I have a database D1 which contains 5 million users, and one more database D2 having 95k users. I want to insert the common users into a new database D3, based on a filter, which is phone number (a unique value). Below is the structure of my tables in D1 and D2:
Now the userProfiles table contains data in string format as below:

User.state         AA
User.City          CC
User.Pin           1234
User.phonenumber   987654
So I am parsing each user with a cursor, writing the phone numbers into a temp table, and then querying the D2 database to verify whether each phone number exists in the Alerts table of the D2 database.
Can anyone please suggest how I can go ahead with this, and also help me on how to improve performance?
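A hedged set-based sketch (untested; the column names beyond the userProfiles and Alerts table names are hypothetical, since the full schemas aren't shown). Extracting the phone numbers once and joining across the databases avoids the row-by-row cursor entirely:

SELECT up.userid,                            -- hypothetical user key column
       up.propertyvalue AS phonenumber
INTO #d1_phones
FROM D1.dbo.userProfiles AS up
WHERE up.propertyname = 'User.phonenumber'   -- hypothetical key/value column names

INSERT INTO D3.dbo.CommonUsers (userid, phonenumber)   -- hypothetical target table
SELECT p.userid, p.phonenumber
FROM #d1_phones AS p
WHERE EXISTS (SELECT 1
              FROM D2.dbo.Alerts AS a
              WHERE a.phonenumber = p.phonenumber)     -- hypothetical column name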