SELECT * FROM TA a
WHERE a.rx = 264886
  AND AN = (SELECT MAX(AN) FROM TA WHERE rx = a.rx)
I have a table TA with 8+ million rows and a clustered PK on the (rx, AN) columns. The count for rx = 264886 is 6000+ rows. This query takes about 1 to 2 minutes to fetch the data. Can anyone suggest how to improve performance and fetch the data faster?
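For comparison, since (rx, AN) is the clustered primary key, the single row with the highest AN for that rx can usually be fetched with one ordered seek. A minimal sketch of that rewrite, assuming you only want that top row:

SELECT TOP 1 *
FROM TA
WHERE rx = 264886
ORDER BY AN DESC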
Okay, here's the problem I'm facing. I have GoDaddy as my hosting company. They only support MSSQL2000, but I just recently bought MSSQL2005 Developer Edition, and I was wondering what the main difference between the two is. I didn't find anything on Google. The only thing is I can't find a better host, because I have 50GB of space and 500GB of bandwidth, and I don't want to lose that just for MSSQL2005 support that my host doesn't have. So I'm stuck. Is there really any difference?
The query below takes considerable time to execute. Can anyone suggest another way to write it?
Declare @p_Mkt_View_Id int
Set @p_Mkt_View_Id = 17

Select Distinct Customer_id
From Active_Product_Cust_Dtl
Where Product_Group_Code in
      (Select Distinct Product_Group_Code
       From Products
       Where Product_code in
             (select Distinct ProductId
              from pit
              where pitid in (select pitid
                              from marketviewdef
                              where mktviewid = @p_Mkt_View_Id)))
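For what it's worth, the nested IN subqueries can usually be expressed as joins, which often gives the optimizer more options. A sketch using only the table and column names from the query above; the DISTINCT keeps the result the same even where the joins multiply rows:

Select Distinct apcd.Customer_id
From Active_Product_Cust_Dtl apcd
     Inner Join Products p        On p.Product_Group_Code = apcd.Product_Group_Code
     Inner Join pit               On pit.ProductId = p.Product_code
     Inner Join marketviewdef mvd On mvd.pitid = pit.pitid
Where mvd.mktviewid = @p_Mkt_View_Id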
Hi, I have this query which is taking too much time to execute. I have tried the following options, but none have helped so far:
1. NOLOCK
2. SET NOCOUNT
3. Changed the disk location of tempdb
4. Checked % Processor Time
5. Checked Pages/sec
Below is the query; any suggestions would be really helpful.
USE IFRepository
--Query
--Returns count of txns whose status is not (10001 or 10002)
declare @fileruntimeuid int
declare @Pendingackcount int
set @Pendingackcount = 0
set @fileruntimeuid = 0
declare @clientname varchar(256)
set @clientname = NULL
declare @txncnt int
set @txncnt = 0
declare @FileNameClient varchar(256)
set @FileNameClient = NULL
declare @StageStatus int
set @StageStatus = 0
declare @StageDesc varchar(35)

declare PendingAcks cursor for
    select distinct fileruntimeuid
    from tiffileruntime WITH (NOLOCK)
    where filecreationdt >= convert(smalldatetime, '9-11-07')
      and filecreationdt <= convert(smalldatetime, '9-12-07')
      and statusid <> 2
      --and filetypeuid in (1,8,16,17,18)
      --and clientuid = 1205
    order by fileruntimeuid
    --244873, 244883, 244885, 244892, 244893, 244925, 244926, 244966, 244967, 244873, 244883

Print 'File Life Cycle Viewer via Database'
Print '==========================================================================================='
Print 'Status   FileRuntimeUID   Client Name   File Status   File name'
Print '==========================================================================================='

Open PendingAcks
FETCH NEXT FROM PendingAcks into @fileruntimeuid
WHILE @@FETCH_STATUS = 0
BEGIN
    select top 1 @StageDesc = b.IFComponentDesc
    from TIFComponent b, TIFFileProcessingStatus a WITH (NOLOCK)
    where a.IFComponentUID = b.IFComponentuid
      and a.fileruntimeuid = @fileruntimeuid
    order by a.FPROCStageStartDt desc -- a.IFComponentUID desc

    select @clientname = ClientShortName
    from tifclientattrib
    where clientuid = (select clientuid from tiffileruntime where fileruntimeuid = @fileruntimeuid)

    select @StageStatus = statusid, @FileNameClient = FileNameClient
    from tiffileruntime
    where fileruntimeuid = @fileruntimeuid

    select @txncnt = FProcTxnProcessedInTotal
    from tiffileprocessingstatus
    where fileruntimeuid = @fileruntimeuid
      and IFComponentUID = 5

    --if @StageDesc = "" Begin @StageDesc = "------------" End
    print RTRIM(convert(varchar(10), @StageStatus)) + '   ' + RTRIM(convert(varchar(10), @fileruntimeuid)) + '   ' + RTRIM(@clientname) + '   ' + @StageDesc + '   ' + RTRIM(@FileNameClient)

    set @StageDesc = NULL
    FETCH NEXT FROM PendingAcks into @fileruntimeuid
END

Print '==========================================================================================='
close PendingAcks
deallocate PendingAcks
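For reference, the per-row lookups inside the cursor loop can usually be done in one set-based statement. A sketch that produces the same columns, using only the table and column names from the script above (the txn count is left out for brevity):

SELECT  fr.statusid          AS StageStatus,
        fr.fileruntimeuid,
        ca.ClientShortName   AS ClientName,
        (SELECT TOP 1 c.IFComponentDesc
         FROM   tiffileprocessingstatus ps WITH (NOLOCK)
                INNER JOIN TIFComponent c ON c.IFComponentUID = ps.IFComponentUID
         WHERE  ps.fileruntimeuid = fr.fileruntimeuid
         ORDER BY ps.FPROCStageStartDt DESC) AS StageDesc,
        fr.FileNameClient
FROM    tiffileruntime fr WITH (NOLOCK)
        INNER JOIN tifclientattrib ca ON ca.clientuid = fr.clientuid
WHERE   fr.filecreationdt >= convert(smalldatetime, '9-11-07')
  AND   fr.filecreationdt <= convert(smalldatetime, '9-12-07')
  AND   fr.statusid <> 2
ORDER BY fr.fileruntimeuid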
I want to look at the execution plan of some Transact-SQL queries. I captured the plan in text form, and it shows operators such as:
index scan, index seek
remote scan, remote update
sort (ORDER BY) clauses
Among the operators above, which ones indicate good performance, and how can I rewrite the query so that the plan uses the better-performing operators and the query executes faster?
Please guide me
The query is:
SELECT [Inventory_Profile].[InventoryID], [Inventory_Profile].[Alias], [Inventory_Profile].[InventoryStatusID],
       [Inventory_Profile].[InventorySubTypeID], [Inventory_Profile].[InventoryTypeID], [Inventory_Profile].[AcquisitionDate],
       [Inventory_Profile].[UnitNumber], [Inventory_Profile].[YearOfManufacture], [Inventory_Profile].[Manufacturer],
       [Inventory_Profile].[Make], [Inventory_Profile].[Model], [Inventory_Profile].[SerialNumber],
       [Inventory_Profile].[UsageConditionID], [Inventory_Profile].[Description1], [Inventory_Profile].[Description2],
       [Inventory_Profile].[LocationEffectiveFromDate], [Inventory_Profile].[IsFlaggedForSale],
       [Inventory_Profile].[RentalPurchaseOrderNumber], [Inventory_Profile].[AquisitionPurchaseOrderNumber],
       [Inventory_Profile].[SortOrder], [Inventory_Profile].[IsSaleLeaseBack],
       [Inventory_Profile].[InterimRentReceivableUpfrontTaxModeID], [Inventory_Profile].[LeaseRentalReceivableUpfrontTaxModeID],
       [Inventory_Profile].[OverTermReceivableUpfrontTaxModeID],
       [TaxDepreciation_Inventory].[IsTaxDepreciationRequired], [TaxDepreciation_Inventory].[IsComputationPending],
       [TaxDepreciation_Inventory].[TaxDepreciationTemplateID], [TaxDepreciation_Inventory].[InventoryCostBasisAmount],
       [TaxDepreciation_Inventory].[DepreciationBeginDate], [TaxDepreciation_Inventory].[DepreciationEndDate],
       [TaxDepreciation_Inventory].[IsTaxDepreciationTerminated], [TaxDepreciation_Inventory].[IsStraightLineMethodUsed],
       [TaxDepreciation_Inventory].[IsLeaseTermUsedForStraightLineMethod],
       [Inventory_PTMS].[Division], [Inventory_PTMS].[Branch], [Inventory_PTMS].[SalesTaxPercent],
       [Inventory_PTMS].[SalesTaxAmount], [Inventory_PTMS].[IsSalesTaxIncluded], [Inventory_PTMS].[GLExpenseAccount],
       [Inventory_PTMS].[GLAssetAccount], [Inventory_PTMS].[SoftwareExclusionAmount], [Inventory_PTMS].[AssetCategoryCodeID],
       [Inventory_PTMS].[OwnershipCodeID], [Inventory_PTMS].[ManufacturingCodeID], [Inventory_PTMS].[ReimburseCodeID],
       [Inventory_PTMS].[BillingStatusID], [Inventory_PTMS].[PropertyTaxExemptionCodeID],
       [Inventory_PTMS].[UserDefinedField1], [Inventory_PTMS].[UserDefinedField2], [Inventory_PTMS].[Notes]
FROM [Inventory_Profile]
     INNER JOIN [TaxDepreciation_Inventory] ON [Inventory_Profile].[InventoryID] = [TaxDepreciation_Inventory].[InventoryID]
     INNER JOIN [Inventory_PTMS] ON [Inventory_Profile].[InventoryID] = [Inventory_PTMS].[InventoryID]
     INNER JOIN [Inventory_Status_CnfgLocale] ON [Inventory_Profile].[InventoryStatusID] in
         (SELECT InventoryStatusID
          FROM Inventory_Status_CnfgLocale
          WHERE InventoryStatusName <> 'Donated'
            and InventoryStatusName <> 'Scrap'
            and InventoryStatusName <> 'Write Off'
            and InventoryStatusName <> 'Sold')
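One thing that stands out: the final INNER JOIN has no condition tying it to a particular row of [Inventory_Status_CnfgLocale], so every qualifying row is multiplied by that table's row count. If that duplication is not intended, a sketch of the same filter without the extra join (column list abbreviated, otherwise the same names as above):

SELECT [Inventory_Profile].[InventoryID], [Inventory_Profile].[Alias] /* ... remaining columns as above ... */
FROM [Inventory_Profile]
     INNER JOIN [TaxDepreciation_Inventory] ON [Inventory_Profile].[InventoryID] = [TaxDepreciation_Inventory].[InventoryID]
     INNER JOIN [Inventory_PTMS] ON [Inventory_Profile].[InventoryID] = [Inventory_PTMS].[InventoryID]
WHERE [Inventory_Profile].[InventoryStatusID] IN
      (SELECT InventoryStatusID
       FROM Inventory_Status_CnfgLocale
       WHERE InventoryStatusName NOT IN ('Donated', 'Scrap', 'Write Off', 'Sold'))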
I have 2 servers: myLocalServer (SQL2005) and myRemoteServer (SQL2000), both on the same LAN. I wish to synchronize a remote table with a local table (both share the same structure) by means of a stored procedure. The number of rows to carry from the local to the remote table is about 20,000. The query takes more than a minute, and I would like to bring that time down. Can you please help me?
myRemoteServer is declared in myLocalServer by means of a Linked Server object, and I declared a synonym called Syn_RemoteTable which represents the remote table.
First I tried a cursor, but it did not work:
declare curLocalTable cursor local forward_only static read_only for
    select ID, Value
    from myLocalTable
    where UpdateTimeStamp > @LastUpdate

open curLocalTable
fetch curLocalTable into @ID, @Value

while @@Fetch_Status = 0
begin
    if exists(select ID from Syn_RemoteTable where ID = @ID)
    begin
        update Syn_RemoteTable set Value = @Value where ID = @ID
    end
    else
    begin
        insert into Syn_RemoteTable (ID, Value) values (@ID, @Value)
    end

    fetch curLocalTable into @ID, @Value
end

close curLocalTable
deallocate curLocalTable
Another way I tried, which performed equally poorly, was:
update Syn_RemoteTable
set Value = T.Value
from Syn_RemoteTable
     inner join (select ID, Value
                 from myLocalTable
                 where UpdateTimeStamp > @LastUpdate) as T on T.ID = Syn_RemoteTable.ID

insert into Syn_RemoteTable (ID, Value)
select ID, Value
from myLocalTable
where UpdateTimeStamp > @LastUpdate
  and ID not in (select ID from Syn_RemoteTable)
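One pattern worth trying: push the changed rows into a staging table on the remote server in one insert, then let the remote server do the UPDATE/INSERT locally so the join does not run row by row across the link. A sketch only; Syn_RemoteStaging, RemoteStaging and RemoteTable are hypothetical names, and EXEC ... AT requires the linked server's "rpc out" option to be enabled:

DELETE FROM Syn_RemoteStaging

INSERT INTO Syn_RemoteStaging (ID, Value)
SELECT ID, Value
FROM myLocalTable
WHERE UpdateTimeStamp > @LastUpdate

-- Run the merge on the remote side so nothing is pulled over the link per row
EXEC ('UPDATE r SET r.Value = s.Value
       FROM RemoteTable r INNER JOIN RemoteStaging s ON s.ID = r.ID;
       INSERT INTO RemoteTable (ID, Value)
       SELECT s.ID, s.Value FROM RemoteStaging s
       WHERE NOT EXISTS (SELECT 1 FROM RemoteTable r WHERE r.ID = s.ID);') AT myRemoteServer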
Hi, I have a problem running this query: it times out on me. My database is small, just about 200 members. I have a site for swapping apartments (rentals). My query should look for matches in a triangle, like this: member A -> B -> C. A gives his apartment to B, B gives his apartment to C, and finally C gives his apartment to A. So my query looks for matching parameters like rooms, location, size and so on. I have one table for existing apartments and one for "wanted apartments", and one table called "intresse" where members can store "yes" or "no" if they are interested in an apartment. I also have a table called "omrade" to store locations of interest. Hope you can help me with some tips so I can run this query in a few seconds instead of 20-30 sec. Thanks, M

The query (it is assembled as a string in ASP, which is why session("medlemsNr") is spliced in at runtime):

SELECT
    F.medlemsNr AS medlemsNr, F.lfId AS lfId, F.ort AS ort, F.gatuadress AS gatuadress,
    F.gatuNr AS gatuNr, F.rum AS rum, F.storlek AS storlek, F.hyra AS hyra,
    count(F.medlemsNr) AS hits
FROM
    medlem08 A, medlem08 B, medlem08 C,
    lagenhetF08 D, lagenhetO08 E, lagenhetF08 F,
    lagenhetO08 G, lagenhetF08 H, lagenhetO08 I
WHERE
    D.rum >= I.rumMin AND D.rum <= I.rumMax AND
    D.storlek >= I.storlekMin AND D.storlek <= I.storlekMax AND
    (I.hyraMax = 0 OR D.hyra <= I.hyraMax) AND
    (I.balkong = '' OR D.balkong = I.balkong) AND
    (I.badkar = '' OR D.badkar = I.badkar) AND
    (I.bredband = '' OR D.bredband = I.bredband) AND
    (I.hiss = '' OR D.hiss = I.hiss) AND
    (I.spis = '' OR D.spis = I.spis) AND
    (I.brf = '' OR D.brf = I.brf) AND
    D.postNr IN (select postNr from ONSKEMAL08 where loId = I.loId) AND
    F.medlemsNr NOT IN (select medlemsNr2 from INTRESSE08 where medlemsNr1 = A.medlemsNr) AND
    H.rum >= G.rumMin AND H.rum <= G.rumMax AND
    H.storlek >= G.storlekMin AND H.storlek <= G.storlekMax AND
    (G.hyraMax = 0 OR H.hyra <= G.hyraMax) AND
    (G.balkong = '' OR H.balkong = G.balkong) AND
    (G.badkar = '' OR H.badkar = G.badkar) AND
    (G.bredband = '' OR H.bredband = G.bredband) AND
    (G.spis = '' OR H.spis = G.spis) AND
    (G.brf = '' OR H.brf = G.brf) AND
    H.postNr IN (select postNr from ONSKEMAL08 where loId = G.loId) AND
    F.rum >= E.rumMin AND F.rum <= E.rumMax AND
    F.storlek >= E.storlekMin AND F.storlek <= E.storlekMax AND
    (E.hyraMax = 0 OR F.hyra <= E.hyraMax) AND
    (E.balkong = '' OR F.balkong = E.balkong) AND
    (E.badkar = '' OR F.badkar = E.badkar) AND
    (E.bredband = '' OR F.bredband = E.bredband) AND
    (E.hiss = '' OR F.hiss = E.hiss) AND
    (E.spis = '' OR F.spis = E.spis) AND
    (E.brf = '' OR F.brf = E.brf) AND
    F.postNr IN (select postNr from ONSKEMAL08 where loId = E.loId) AND
    A.medlemsNr = D.medlemsNr AND A.medlemsNr = E.medlemsNr AND
    B.medlemsNr <> A.medlemsNr AND C.medlemsNr <> A.medlemsNr AND
    B.medlemsNr <> C.medlemsNr AND
    B.sparr <> 1 AND C.sparr <> 1 AND
    A.typ = 11 AND A.medlemsNr = " & session("medlemsNr") & " AND
    B.medlemsNr = F.medlemsNr AND B.medlemsNr = G.medlemsNr AND
    B.typ = 11 AND A.triangel = 1 AND B.triangel = 1 AND C.triangel = 1 AND
    C.medlemsNr = H.medlemsNr AND C.medlemsNr = I.medlemsNr AND
    C.typ = 11
GROUP BY F.lfId, F.medlemsNr, F.ort, F.gatuadress, F.gatuNr, F.rum, F.storlek, F.hyra
I am doing stored procedure tuning. The procedure has several statements, so I divided it into several small queries, executed them individually and checked the execution plans. In one small query I found a table scan happening. That query basically retrieves all columns from a table, but the table doesn't have any PK or indexes. So is it better to create a non-clustered index to remove the table scan?
Create Index ind_Item_Name on Item(I_Name);
Create Index ind_Item_BC on Item(I_BC);
Create Index ind_Item_Company on Item(I_Company);
It is populated with 50 000 records. Searching on indexed columns is fast, but I've run into the following problem: I need to get all distinct companies in the table. I've tried with these two queries, but they both are very slow!
1. "select I_Company from item group by I_Company " - This one takes 19 seconds
2. "select distinct(I_Company) from item" -This one takes 29 secons
When I ran them through SQL Management Studio and checked the execution plan, I saw that the second one doesn't use the index at all! So I focused on the first. The first one used the index (which took 15% of the time), but then it ran a "stream aggregate" which took 85% of the time! Actually, 15% of 19 seconds - about 2 seconds - would be pretty much enough for me. But it looks like the aggregate is run for nothing! So is it possible to force the query engine of SSCE not to run it, since there are actually no aggregate functions in my select clause? According to SQL CE Books Online, for GROUP BY:
"Specifies the groups (equivalence classes) that output rows are to be placed in. If aggregate functions are included in the SELECT clause <select list>, the GROUP BY clause calculates a summary value for each group." It seems the aggregate is run every time, not only when there is an aggregate function.
I am doing performance tuning of stored procedures/queries in a dev-test environment.
I found that SQL Server caches the plan between successive executions.
So if I test/execute an SP 10 times, after the 1st or 2nd execution SQL Server pulls the plan from the cache rather than compiling it again.
That means I am not getting accurate timings.
I found these 2 commands:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But they say that executing the above commands might interfere with other people executing queries/SPs on this server.
They also say that: Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. This can cause a sudden, temporary decrease in query performance.
Part of the query was using dynamic SQL executed with the EXEC command.
I replaced that with sp_executesql.
How can I start each SP test run with a fresh/blank cache?
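A narrower alternative, as a sketch: mark just the procedure under test for recompilation instead of clearing the whole server's plan cache (the procedure name below is a placeholder):

EXEC sp_recompile 'dbo.MyStoredProc'   -- forces a new plan for this object only

-- If you also want cold data pages for the test run (this still affects the whole instance):
-- CHECKPOINT
-- DBCC DROPCLEANBUFFERS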
I'm having trouble writing a multi-table query that needs more than one JOIN.
For each order, I need to return the following: CarsID, CarModelName, MakeID, OrderDate, ProductName, Total ordered, and the Car Category.
The carid (primary key) and carmodelname belong to the Cars table. The makeid and orderdate belong to the OrderDetails table. The productname and carcategory belong to the Product table.
The number of rows returned should be the same as the number of rows in OrderDetails.
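A sketch based on the columns described above; the join key columns (CarsID and ProductID on OrderDetails) are assumptions, since the post does not spell them out. Starting from OrderDetails keeps the row count equal to that table when the foreign keys are always populated:

SELECT  c.CarsID,
        c.CarModelName,
        od.MakeID,
        od.OrderDate,
        p.ProductName,
        p.CarCategory
FROM    OrderDetails od
        INNER JOIN Cars c    ON c.CarsID = od.CarsID        -- assumed FK
        INNER JOIN Product p ON p.ProductID = od.ProductID  -- assumed FK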
A piece of software I wrote started timing out on a query that left outer joins a table to a view. Both the table and the view have approximately the same number of rows (about 170,000).
The table has 2 very similar columns: one is a varchar(1) and the other a varchar(100). Neither is included in any index, and beyond the size difference the columns have the same properties. One of the employees here uses the varchar(1) column (called miscsearch) to tag large sets of rows to perform some action on. In this case, he had set the miscsearch value of 9000 rows to "g". The query should then join the table and view for all rows where miscsearch is set to "g" in the table. This query takes at least 20 minutes to run (I stopped it at that point).
If I remove the "where" clause and join all rows in the two tables, the query completes in about 20 seconds. If I set the varchar(100) column (called descrip) to "g" for the same rows set via miscsearch, the query completes in about 20 seconds.
If I force the join type to a hash join, the query completes using miscsearch in about 30 seconds.
So, this works:
SELECT di.File_No, prevPlacements, balance,'NOT PLACED' as status FROM Info di LEFT OUTER HASH JOIN View_PP pp ON di.ram_file_no = pp.file_no WHERE miscsearch = 'g' ORDER BY balance DESC
and this works:
SELECT di.File_No, prevPlacements, balance,'NOT PLACED' as status FROM Info di LEFT OUTER JOIN View_PP pp ON di.ram_file_no = pp.file_no WHERE descrip = 'g' ORDER BY balance DESC
But this doesn't:
SELECT di.File_No, prevPlacements, balance,'NOT PLACED' as status FROM Info di LEFT OUTER JOIN View_PP pp ON di.ram_file_no = pp.file_no WHERE miscsearch = 'g' ORDER BY balance DESC
What should I be looking for here to understand why this is happening?
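Two things that might be worth checking, as a sketch (the index name below is a placeholder): miscsearch has no index, and the statistics on it may be giving the optimizer a bad row estimate, which is what drives the choice between a loop and a hash join.

CREATE INDEX IX_Info_miscsearch ON Info (miscsearch)
UPDATE STATISTICS Info WITH FULLSCAN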
I have 2 tables; I will add sample data to them to help me explain.

Table1 (Fields: A, B)
=====
1, One
2, Two
3, Three

Table2 (Fields: A, B)
=====
2, deux
9, neuf

I want to create a query that will only return data as long as the key (Field A) is in both tables; if not, return nothing. How can I do this? I am thinking about using a 'JOIN' but not sure how to implement it. I.e., 2 would return data, but 9 would not. Any help would be appreciated.
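A sketch using the sample data above: an INNER JOIN only returns rows whose key exists in both tables, so key 2 comes back and key 9 does not.

SELECT t1.A, t1.B, t2.B
FROM   Table1 t1
       INNER JOIN Table2 t2 ON t2.A = t1.A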
Hi, may I ask how to export an MSSQL2000 DB to a *.sql file with all existing data? I built the DB on my local machine, and the remote machine only provides SQL Server Web Admin, which has Query Analyzer only. I can export a *.sql file, but it contains no data. Thanks for the help.
Hi! I am a newbie using MSSQL, so please have patience. I compact my database with:

DBCC SHRINKDATABASE(DB, 10)
BACKUP LOG data WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE(DB, 10)
When I look at the properties of my DB I see something strange:
1. My log always has 1 MB of space.
2. The database properties show me this information:

Size: 21639.23 MB
Space Available: 782.23 MB
Backup
Last database backup: 9/28/2007 1:44 AM
Last transaction log backup: 6/9/2007 12:47 PM
I am worried because this says I only have 782.23 MB available, even though my disk still has 30 GB free.
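A quick way to see how that size breaks down, as a sketch: run these in the database in question. sp_spaceused reports the database size and how much of it is unallocated, and DBCC SQLPERF(LOGSPACE) shows how full each log file actually is.

EXEC sp_spaceused
DBCC SQLPERF(LOGSPACE)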
Ok, so this 3rd party application we have exhibits some interesting behavior: when you amend any table through its interface, it drops and re-adds the object, meaning all permissions on the object are also removed. (Just take my word for it when I say that the table changes have to be done through this app.)
Now, I've built a little application that hooks into this using an SQL Server authenticated user account with SELECT only permission on particular tables. Currently the app uses sprocs to access data. However, when changes were made to the schema last week; the application, obviously, received permission errors and ground to a nice halt.
I know that in 2005 we have the lovely EXECUTE AS statement, but I'm running a database at compatibility level 65 here and don't have that functionality.
Any ideas on how I can sort this mess out?
Hope I explained this well enough, let me know if you need any more info
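One workaround, as a sketch: keep the grants in a script and re-run it after the third-party app rebuilds its tables. The user name below is a placeholder; the SELECT just generates the GRANT statements for every user table so you don't have to maintain the list by hand.

SELECT 'GRANT SELECT ON [' + name + '] TO myAppUser'
FROM sysobjects
WHERE type = 'U'
ORDER BY name

-- e.g. the generated output looks like:
-- GRANT SELECT ON [SomeTable] TO myAppUser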
Hi, I have a bunch of data in tables on MS SQL 2000 and I want to transfer this data to my new database running on MS SQL 2005. How do I perform bcp for this? Thanks :D
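A sketch of the usual bcp round trip, run from a command prompt; the server, database, table and file names are placeholders, -n uses native format, -S names the server and -T uses a trusted (Windows) connection:

bcp "OldDb.dbo.MyTable" out "C:\temp\MyTable.dat" -n -S OLDSERVER -T
bcp "NewDb.dbo.MyTable" in  "C:\temp\MyTable.dat" -n -S NEWSERVER -T

The target table has to exist on the 2005 side first; the schema can be scripted out of the 2000 database and run there beforehand.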
Hi Group,

I developed an intranet site using MSSQL7/Win2000 some time ago. The target environment used MSSQL2000/8. We were (almost painlessly) able to import the db scheme and data from 7 to 8. (Bravo MSSQL.)

Now I need to do some upgrading on the application, and I would like to have a copy of the database from MSSQL2000/8 back on MSSQL7. Is that also possible? Or should I download Microsoft SQL Server Express and use that instead of my MSSQL7? Is it better? I hope I can get the relation sheet too (the one with foreign keys mapped in a nice graphical way).

Any advice highly appreciated. I am good with PostgreSQL, but my MSSQL skills leave a lot to be desired. :-/ For an outsider like me the many versions and OSs are quite confusing. Do I need special commands on MSSQL2000/8 to create an MSSQL7-compatible export?

Thanks in advance!
Regards,
Erwin Moller
Hi, I am having a problem installing SQL 2000 Server. The steps I did are:
1. Local machine -> next
2. Create a new instance of SQL Server -> next
3. Type name and company -> next
4. Accept -> next
5. Server and client tools -> next
6. Default -> next
7. Typical -> next
8. Use the same account ... and use the Local System account -> next
9. Mixed mode, password -> next
Whatever options I choose, the SQL installation program exits after step 9. I just don't know what the problem is. Please help, thanks :)
Hi. I've never used an MSSQL 2000 db before. I have this website that uses MSSQL2000. I want to export it and convert the db to MySQL. Please help me understand how I can do that. I have all the login info.
I have mssql2000 running on a Windows 2003 server and now have a requirement to run an Oracle 10g database as well. Is it possible to run both mssql2000 and oracle 10g on the same server without running into any conflicts or will the two programs cause errors with each other?
Anyone have any experience with this? Oracle says it's technically possible but the tech had never seen it done.
Hi, I'm executing a stored procedure on my local LAN which executes another one in a loop, and I update a table. The number of loops is about 6300. This operation takes about 25 seconds on my local LAN. Then I try to execute it through a VPN which has an upload speed of 256 kbps. I open Query Analyzer, connect to the remote server, which is much faster than mine, and I just write exec mystoredprocname in order to execute the procedure. The performance is very, very slow: in 7 minutes, 180 loops are completed out of 6300. I really cannot understand this. What is the reason for such slow performance? My ADSL modem displays no activity while the procedure is executed. I just use the PRINT statement in MSSQL in order to display the progress of the operation; I tried to comment it out, but with no difference. I also use SET NOCOUNT ON in order not to return the update row counts.
Can someone explain the cause of this? Are there any tricks to improve performance when a slow connection such as ADSL with a static IP is used? It seems that something is going wrong here.
Hello! Does anybody know whether MSSQL2000 and EMC MirrorView are _certified_ for joint work? (MirrorView is an FC-based remote mirroring solution.) I mean, is it supported from the MS point of view to put MSSQL data files on EMC MirrorView volumes? For example, Oracle Corp. has an "Oracle Compatible Remote Mirroring Technologies" certification. But what about MS?
(MSSQL2000) I have read the transaction/locking sections in the MS help, online and in several books. What I want to understand is the transaction behavior of single statements [not a BEGIN TRANSACTION Statement1, Statement2 ... COMMIT].

If I have a table "Letters" with 1 column "L", and the table presently has rows {A,B,C,D}:

Case 1 (Insert):
First, start transaction T1: "SELECT * FROM Letters"
Next, start transaction T2 [separate connection]: "INSERT INTO Letters VALUES( 'Z' )"
Is it possible that T2 ends before T1 and the select returns {A,B,C,D,Z}?
Is it possible that T1 ends before T2 and the select returns {A,B,C,D} [no 'Z']?
Is this a race condition, and do I need to use TABLOCK or TABLOCKX? Are TABLOCK/TABLOCKX only hints - I mean, does the use of TABLOCK guarantee a lock on the table? Do I need to use 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE', and if I use a 'TRANSACTION ISOLATION LEVEL' is there a means of telling the system which tables I will touch so that I can avoid a deadlock [tell the system up front what tables I need to lock so there is not a race later]?

Case 2 (Delete, basically the same):
First, start transaction T1: "SELECT * FROM Letters"
Next, start transaction T2: "DELETE FROM Letters WHERE L = 'D'"
Is it possible that T2 ends before T1 and the select returns {A,B,C} [no 'D']?
Is it possible that T1 ends before T2 and the select returns {A,B,C,D}?

Case 3 (Update, basically the same):
First, start transaction T1: "SELECT * FROM Letters"
Next, start transaction T2: "UPDATE Letters SET L = 'Z'"
Is it possible that T2 ends before T1 and the select returns {A,B,Z,Z} [some letters were seen to become 'Z']?
Is it possible that T1 ends before T2 and the select returns {A,B,C,D}?
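For reference, a sketch of what the serializable option from the question looks like in practice; running the reader this way (or with a table-lock hint on the single statement) is what keeps the second connection from changing the set it sees mid-transaction:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
    SELECT * FROM Letters
    -- further statements in this transaction see the same stable set
COMMIT TRANSACTION

-- Or, per statement, a hint such as:
-- SELECT * FROM Letters WITH (TABLOCKX, HOLDLOCK)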
I've got some problems with my application (ASP and the English version of SQL Server 2000).
As we are using English MSSQL Server 2000, we got some new functions and we have to facilitate support of Chinese characters in the DB. I have set the collation for those Chinese fields already and those queries or Stored Procs for Chinese are working fine, ONLY if I execute them in Enterprise Manager. Chinese characters can be displayed in the relevant tables.
However, here comes the big problem, and I got really frustrated. As we will provide the user interface in ASP pages, we'll let users enter the information, which is then sent to the DB. If there are Chinese characters in the query string, the Chinese characters end up garbled in the DB.
e.g. EXECUTE proc_TestChinese 'XYZ', 'test123' (assume XYZ stands for the Chinese words)
I am wondering if there's any way I can solve this problem. Should I add special handling for these Chinese words? I have set the ASP pages to UTF-8 or Big5 encoding, but it doesn't help. Hope you experts can show me the way out of this mess. Thanks in advance!
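A sketch of the usual fix: the columns and the procedure parameters need to be nvarchar (not varchar), and Unicode literals sent to the server must carry the N prefix; otherwise the values are converted through the server's non-Chinese code page and arrive garbled.

-- N'XYZ' standing in for the Chinese text; proc_TestChinese's parameters assumed to be nvarchar
EXECUTE proc_TestChinese N'XYZ', N'test123'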
I have a database with 1 data file of 60 GB. Is it a good thing (for performance) to split up this data file into smaller data files of 6 GB each? I don't have separate disk slices, so I can't spread my data files across disks; I only need to know whether a 60 GB data file isn't too big for MSSQL2000.
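If you do decide to add files, it is an ALTER DATABASE operation. A sketch only; the database, logical file name, path and size are placeholders:

ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_Data2, FILENAME = 'D:\Data\MyDb_Data2.ndf', SIZE = 6000MB)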
Sub Page_Load(Src As Object, E As EventArgs)
    Dim MyConnection As SqlConnection
    Dim da As SqlDataAdapter
    Dim ds As DataSet

    ' SqlConnection does not accept a DSN; it needs a server/database connection string
    MyConnection = New SqlConnection("Server=***;Database=***;UID=***;PWD=***")
    da = New SqlDataAdapter("select * from city", MyConnection)

    Try
        MyConnection.Open()
        Response.Write("Connection Success " & MyConnection.Database & "<br>")
    Catch sx As SqlException
        Response.Write("Connection failed: " & sx.Message & "<br>")
    End Try

    ds = New DataSet()
    da.Fill(ds, "City")

    With DataGrid1
        .DataSource = ds.Tables("City").DefaultView
        .DataBind()
    End With
End Sub
I was using MSSQL7 for a long period; I upsized to MSSQL7 from Access some years ago. For no particular reason, when writing code in stored procedures, whenever I wanted to select records having a bit column set to true I used the syntax bitcolumn=-1 (and not bitcolumn=1). This was the convention in Access. Everything worked fine. Then I moved to MSSQL2000, and after restoring the MSSQL7 database I had no problem. However, in order to use some features of MSSQL2000 I had to run sp_dbcmptlevel <database>, 80. After that, the condition bitcolumn=-1 no longer worked. Can anyone verify this behavior, since I would have to make dozens of changes in my stored procedures and triggers?
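For reference, a rewrite that behaves the same at either compatibility level is to compare against 1 (or "not 0") rather than -1. A sketch with a hypothetical table name:

SELECT * FROM MyTable WHERE bitcolumn = 1
-- or, equivalently: WHERE bitcolumn <> 0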
I cannot get a multi-row resultset to display or even get sent to my client application running on ColdFusion. What is the problem with the code? How do I display and return a resultset to my ColdFusion client application?

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
ALTER PROCEDURE dbo.nTransaction (@pAccountNo varchar(30), @N int, @ntransCursor CURSOR VARYING OUTPUT, @nValue varchar(4000) OUTPUT) AS
set @N = 0
SET ROWCOUNT @N
SET @ntransCursor = CURSOR FOR -- FORWARD_ONLY STATIC
    SELECT CONVERT(varchar, Eh.EntryID), Eh.EntryReference, E.AccountNo, E.Narrative, E.Amount, Eh.EntryDate
    FROM entryheaders as Eh cross join entrys as E
    WHERE Eh.EntrySerial = E.EntrySerial and AccountNo = @pAccountNo
    ORDER BY Eh.EntryID DESC

SET ROWCOUNT 0
OPEN @ntransCursor
WHILE (@@FETCH_STATUS = 0)
BEGIN
    FETCH NEXT FROM @ntransCursor into @nValue
END

CLOSE @ntransCursor
DEALLOCATE @ntransCursor;
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
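For a ColdFusion client it is usually simpler to return an ordinary result set than a cursor output parameter, since the client driver picks up whatever the procedure SELECTs. A sketch only, using the same tables as above and dropping the cursor parameters:

ALTER PROCEDURE dbo.nTransaction (@pAccountNo varchar(30), @N int)
AS
SET NOCOUNT ON
SET ROWCOUNT @N        -- 0 means "no limit"

SELECT CONVERT(varchar, Eh.EntryID), Eh.EntryReference, E.AccountNo,
       E.Narrative, E.Amount, Eh.EntryDate
FROM   entryheaders AS Eh
       INNER JOIN entrys AS E ON Eh.EntrySerial = E.EntrySerial
WHERE  E.AccountNo = @pAccountNo
ORDER BY Eh.EntryID DESC

SET ROWCOUNT 0
GO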
Hi All,

I need to aggregate a query to produce the following:

Workplace   Avg
M100        4.7
M120        3.45

Which would be a normal aggregate:

SELECT Workplace, Avg(VALUE)
FROM PROD
GROUP BY Workplace

However, I need the average to only be based on the most recent 20 results from each of the Workplace groups. I've never had to do something like this before, so I can't think of any way to take only the most recent 20 for each group (ordered by Date). It doesn't really matter if there were 25 spread across 2 days; I would just cut the list at 20 VALUEs, as there is no time component involved.

Is there any way to do a sub-query that uses select top 20 ... for each group that could then be aggregated? I would prefer to do it through a select statement rather than having to use a stored procedure with variables, etc., which I can do. The table is not huge but is growing rapidly, so I'm concerned that anything using dynamic SQL or similar would become painful as the number of groups grows to 5,000 or more.

If anyone has any ideas they would be greatly appreciated.

Thanks in advance,
Bevan
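One plain-SELECT sketch that works on SQL Server 2000: a correlated TOP 20 subquery per group. It assumes the table has a unique key column, here called ProdID, and a Date column - adjust the names to your table:

SELECT p.Workplace, AVG(p.VALUE) AS AvgValue
FROM   PROD p
WHERE  p.ProdID IN (SELECT TOP 20 p2.ProdID
                    FROM   PROD p2
                    WHERE  p2.Workplace = p.Workplace
                    ORDER BY p2.Date DESC)
GROUP BY p.Workplace

The correlated subquery is re-evaluated per group, so with many groups an index on (Workplace, Date) would likely matter more than the exact query shape.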