Extremely Slow TSQL
Apr 23, 2008
Hey Guys
I have a huge speed issue on one or two of my SQL Tables. I have included the basic design below.
Structure
  Id
  ParentId
  Name
Group
  Id
  ParentId
  Name
  Weight
Products
  Id
  Name
StructureProducts
  StructureId
  ProductId
  Imported
StructureGroups
  StructureId
  GroupId
GroupProducts
  GroupId
  ProductId
AnswerDates
  Id
  AssessmentDate
Scores <-- This table is the slow table
  AnswerDateId
  StructureId
  GroupId (nullable)
  ProductId (nullable)
  Score (0 to 100)
Ok, structures are the start of everything. Structures have children. If a group/product is linked to a parent or child structure, then that group/product is visible along the structure tree's flow path. Groups, like structures, have children. And also like structures, if a group is given a product, then that product is visible through the structure tree's flow path.
Example:
Earth [Structure]
- Asia [Structure]
--- China [Structure]
--- Japan [Structure]
----- Computer Stuff [Group]
------- Desktops [Group]
------- Servers [Group]
------- Laptops [Group]
--------- HP [Product]
--------- Dell [Product]
--------- Fujitsu [Product]
- Europe [Structure]
--- Germany [Structure]
----- Berlin [Structure]
--- Italy [Structure]
----- Rome [Structure]
----- Venice [Structure]
- America [Structure]
--- United States of America [Structure]
----- New York [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Washington [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Chicago [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
- Africa [Structure]
--- South Africa [Structure]
----- Johannesburg [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Durban [Structure]
----- Cape Town [Structure]
- Australasia [Structure]
So the initial steps that happen (with regard to scoring) are as follows.
1. Insert the root score (which is for a structure, a group, an answer date, and either a product or no product).
2. Score the next group up the tree, using the scores for the groups at the same level as the original group (0 if no score exists).
3. Continue this until the group tree is at its root (ParentId == null).
4. Using the next structure up the tree, repeat steps 2 & 3.
5. Continue step 4 until the structure is at its root (ParentId == null).
Example
Scoring the [Acer] product in the [Laptop] group for Johannesburg would go as follows:
1. Initial score for [Acer] product against Group [Laptop] for Johannesburg.
2. Calculate score for all products (ProductId = null) against [Laptop] for Johannesburg.
3. Calculate score for [Acer] product against Group [Computer Stuff] for Johannesburg.
4. Calculate score for all products against Group [Computer Stuff] for Johannesburg.
5. Calculate score for [Acer] product against all root groups for Johannesburg.
5.1. Groups [Computer Stuff] and [Home Stuff]
6. Calculate score for all products against all root groups for Johannesburg.
6.1. Groups [Computer Stuff] and [Home Stuff]
7. Calculate score for [Acer] product against Group [Laptop] for South Africa.
8. Calculate score for all products (ProductId = null) against [Laptop] for South Africa.
9. Calculate score for [Acer] product against Group [Computer Stuff] for South Africa.
10. Calculate score for all products against Group [Computer Stuff] for South Africa.
11. Calculate score for [Acer] product against all root groups for South Africa.
11.1. Groups [Computer Stuff] and [Home Stuff]
12. Calculate score for all products against all root groups for South Africa.
12.1. Groups [Computer Stuff] and [Home Stuff]
13. Calculate score for [Acer] product against Group [Laptop] for Africa.
14. Calculate score for all products (ProductId = null) against [Laptop] for Africa.
15. Calculate score for [Acer] product against Group [Computer Stuff] for Africa.
16. Calculate score for all products against Group [Computer Stuff] for Africa.
17. Calculate score for [Acer] product against all root groups for Africa.
17.1. Groups [Computer Stuff] and [Home Stuff]
18. Calculate score for all products against all root groups for Africa.
18.1. Groups [Computer Stuff] and [Home Stuff]
etc. etc. etc...
That basically covers the concept behind the basic scoring methodology. Now the methodology splits in two. The first, Methodology 1, says the calculations should use the exact same date as the original scored date (i.e. if I do a score today, only scores from today will be used in the calculations). The other, Methodology 2, says the calculations should use the latest available date (i.e. if I do a score today, scores from today plus the latest scores before today will be used in the calculations).
To add another problem to this already complex process, each group and each product within a structure can have either of the 2 scoring methodologies assigned to it. Also, products can only be scored against the structures and groups they are assigned to. I.e. Acer exists in the Laptop group in Johannesburg, South Africa and Africa, but doesn't exist in New York.
Ok, so now that I've explained briefly how this scoring works, let me get to the heart of the problem. Basically it's speed (you can clearly see why), though the speed issue only comes up in one place: where it has to look backwards for the latest available score for the required group, structure and product.
For this I wrote a function:
ALTER FUNCTION [dbo].[GetLatestAnswerDateId]
(
    @StructureId  INT,
    @GroupId      INT,
    @ProductId    INT,
    @AnswerDateId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @Id   INT
    DECLARE @Date DATETIME

    -- Date of the given answer date, or the latest date if @AnswerDateId is null
    SELECT TOP 1 @Date = [Date]
    FROM [dbo].[AnswerDate]
    WHERE [Id] = ISNULL(@AnswerDateId, [Id])
    ORDER BY [Date] DESC

    -- Latest answer date on or before @Date for this structure/group/product
    SELECT TOP 1 @Id = ad.[Id]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.[Id] = gs.[AnswerDateId]
    WHERE [StructureId] = @StructureId
      AND ISNULL([GroupId], -1) = ISNULL(@GroupId, -1)
      AND ISNULL([ProductId], -1) = ISNULL(@ProductId, -1)
      AND [Date] <= @Date
    ORDER BY [Date] DESC

    RETURN @Id
END
Now on small amounts of data (1,000 rows or so) it's quick, though that is because the data is minimal; on large amounts of data this function runs for a long time. Specifically in the context of the following, when there are 6 months of scoring data (100,000+ rows) to peruse.
SELECT [StructureId], [GroupId], [AnswerDateId], [ProductId], [Score]
FROM [Scoring]
WHERE AnswerDateId = [dbo].[GetLatestAnswerDateId]([StructureId], [GroupId], [ProductId], NULL)
  AND [StructureId] = @StructureId -- e.g. South Africa
  AND [GroupId] = @GroupId         -- e.g. Computer Stuff
  AND [ProductId] = @ProductId     -- e.g. Acer
Any ideas on how to make this quick? Or quicker?
My current runtime for calculating the 2,500 base scores (totalling 100,000± rows) is 15 hours, though this is an initial calculation and is only supposed to be done once.
Also, the calculations are all correct, so my only issue is the speed of the entire process.
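One avenue worth testing, offered as a minimal, untested sketch against the table names used in the function above ([Scoring], [AnswerDate]); the CROSS APPLY form assumes SQL Server 2005 or later. Scalar UDFs run once per row, so inlining the lookup and giving it an index to seek on is usually the biggest win:
-- Sketch 1: an index the backwards lookup can seek on
CREATE INDEX IX_Scoring_Lookup
    ON [dbo].[Scoring] ([StructureId], [GroupId], [ProductId], [AnswerDateId]);
CREATE INDEX IX_AnswerDate_Date
    ON [dbo].[AnswerDate] ([Date]);

-- Sketch 2: the same lookup the UDF does (null @AnswerDateId case, so no upper
-- date bound is needed), but inline via CROSS APPLY so the optimizer can use
-- the index instead of calling the function once per row
SELECT s.[StructureId], s.[GroupId], s.[AnswerDateId], s.[ProductId], s.[Score]
FROM [dbo].[Scoring] s
CROSS APPLY
(
    SELECT TOP 1 ad.[Id]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.[Id] = gs.[AnswerDateId]
    WHERE gs.[StructureId] = s.[StructureId]
      -- ISNULL() on a column defeats index seeks; rewriting each test as
      -- (col = val OR (col IS NULL AND val IS NULL)) seeks better
      AND ISNULL(gs.[GroupId], -1) = ISNULL(s.[GroupId], -1)
      AND ISNULL(gs.[ProductId], -1) = ISNULL(s.[ProductId], -1)
    ORDER BY ad.[Date] DESC
) latest
WHERE s.[AnswerDateId] = latest.[Id]
  AND s.[StructureId] = @StructureId
  AND s.[GroupId] = @GroupId
  AND s.[ProductId] = @ProductId;
Comparing the estimated plans of the UDF version and the APPLY version should show the difference immediately.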
Thanks in advance
Jonathan
WARNING: Running on cold coffee!
View 7 Replies
May 13, 2004
Hi guys.
I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 driver (the DSN was configured by a sysadmin on the AS400, so it is configured properly).
When I execute the package manually (by right-clicking and "Execute Package") the package runs fine and returns data in no time (approximately 30,000 rows in 15 sec).
The problem starts when a job executes the same package!
When I start the job, the DTS package runs very, very slowly! Sometimes it takes hours to return a few rows, and it seems to be stuck.
The SQL Agent is running as an NT account with administrator rights, and has full access to the AS400, so the problem is not the Agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly, and then sits in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what could be the problem?
I Need Help Quick!!!
Thank You
Gil
View 5 Replies
View Related
Sep 16, 2005
Hi, I have a table defined as
CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL ,
    [Date] [datetime] NULL ,
    [Time] [datetime] NULL ,
    [TroubleshootId] [int] NOT NULL ,
    [ReasonID] [int] NULL ,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [maj_reason_id] [int] NULL ,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [ActionID] [int] NULL ,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [Duration] [int] NULL ,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    [Acct_Num] [int] NULL ,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO
which contains 5.6 million rows and has nonclustered indexes on Date, ReasonID, maj_Reason, Connection. Compared to other tables on the same server this one is extremely slow. A simple query such as:
SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > DATEADD(month, -2, GETDATE())
takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment.
How can I assess what is causing the length of query time? And what could I possibly do to speed this table up? The database itself is running on a dedicated server which has some other databases, none of which have this performance issue.
Anyone have any ideas?
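For the "how can I assess" part, a minimal first step is to measure where the time goes before touching the schema (the query below is exactly as posted, only wrapped in statistics switches):
SET STATISTICS IO ON;    -- logical reads per table: scans vs. lookups
SET STATISTICS TIME ON;  -- CPU time vs. elapsed time

SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > DATEADD(month, -2, GETDATE());

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
If the output shows one bookmark/RID lookup per row on top of the Date index, the wide nvarchar columns are being fetched row by row; a covering index or clustering on Date would be the usual next experiments.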
View 5 Replies
View Related
Sep 17, 2007
Hello! I have a query that joins five tables and returns around 45,000 rows, and takes no more than a minute to execute in Management Studio on a SQL Server 2005, 2 CPU 32-bit (dual core), 4 GB, RAID5 disk system. The O/S is Windows 2003 SP2 Standard Edition.
When the same query is executed in SSRS 2005, with some drilldown and summary of drilldown levels, it never finishes executing.
Looking at the hardware in the performance monitor reveals nothing strange, except that % CPU time is around 40 percent. Memory resources over 2 GB are available all the time.
Any suggestions are appreciated!
Any problems with SQL Server 2005 source database running on SQL Server 2000 compatibility level?
Thomas Ivarsson
View 1 Replies
View Related
Feb 22, 2007
On a dual opteron server, viewing the properties of a task inside a data flow was immediate up until this morning.
We just installed SP2 and rebooted, and now when you right-click on any task it takes between 30 seconds and 5 minutes before the context menu pops up.
The server is 99% idle, most of the 16 Gigs of memory are idle as well.
View 1 Replies
View Related
Nov 17, 2006
I have a package that reads the contents of 11 Excel files into various tables. Opening this package in the designer, or with DTExecUI, is extremely slow. In both cases, when I open the package it takes over 10 minutes to do anything. Visual Studio gives the "Visual Studio is Busy" message for 10 minutes and DTExecUI just hangs. DTExecUI actually hangs twice, once when opening the package and a second time when clicking "Execute" (totalling over 20 minutes). It seems like no matter how I try to run the package it will always hang for 10 minutes before running, with no status message or anything. Once it runs, it completes quickly with no errors.
The various tasks in the package are fairly simple, most being Source > Data Conversion > Destination.
Any suggestions?
View 4 Replies
View Related
Jan 26, 2007
Hello
I'm experiencing an extremely slow connection from a Windows XP Pro SP2 client to MSSQL 2000 running on a W2K server. The client is running a VB6 application that connects with Windows authentication: every form requesting data opens with a long delay on the first launch; subsequent attempts run normally fast.
On the same LAN there are some other identical clients, all running fine.
Every other network activity from that client is OK.
Where should I start investigating?
View 3 Replies
View Related
Apr 7, 2006
Hi folks,
I'm working within the VS2005/Business Intelligence Studio environment. I've got one master package, which loads about 18 sub-packages as tasks.
After opening the master package (and waiting 5-10 minutes for the packages to open and validate), maneuvering within the IDE is nearly impossible, it is so slow. Context menus can take 30 seconds to open. Certain operations, like closing a window, seem to hang the environment.
Does anyone have any feedback about this kind of IDE performance problem?
Thanks, Scott
View 10 Replies
View Related
Mar 21, 2007
I have installed SQL 2005 on my server (a very high spec server),
but I wonder why every time I launch Management Studio it takes forever to load... about 5 minutes.
When I right-click on a table and view its properties, it can take up to 5 minutes to display the details.
Is there any setting / configuration that I configured wrongly?
View 4 Replies
View Related
Mar 14, 2007
Hey all... I have a simple TSQL statement that joins like 5 tables and returns like 5 records or so; maybe someone can tell me why it's taking 1 minute and 24 seconds of execution time when run directly in SQL Management Studio connected to the local CE database (ffgsCRM.sdf).
--CE TSQL USED....................
SELECT ProposalHeader.PropHShipTo, ProposalHeader.PropHNumb, ProposalHeader.PropHAddr1, ProposalHeader.PropHAddr2, ProposalHeader.PropHAddr3,
ProposalHeader.PropHCity, ProposalHeader.PropHState, ProposalHeader.PropHZip, ProposalDetail.PropDLine, ProposalDetail.PropDItem,
ProposalDetail.PropDMfg, ProposalDetail.PropDCat, ProposalDetail.PropDXSellPrice, SalesRep.RepName, SalesRep.RepEmail,
AccountShipTo.ShipToName, AccountShipTo.ShipToCity, AccountShipTo.ShipToState, AccountShipTo.ShipToZip, ProposalHeader.PropHRevNumb,
Product.ProductNumber, Product.ProductDesc, ProposalDetail.PropDShipQty, ProposalDetail.PropDItemTotal, ProposalDetail.PropDCost,
ProposalDetail.PropDSellPrice, ProductAddDesc.ProductADLong, AccountBillTo.BillToName, AccountBillTo.BillToAddr1, AccountBillTo.BillToAddr2,
AccountBillTo.BillToAddr3, AccountBillTo.BillToCity, AccountBillTo.BillToState, AccountBillTo.BillToZip
FROM ProposalHeader INNER JOIN
ProposalDetail ON ProposalHeader.PropHNumb = ProposalDetail.PropDNumb AND
ProposalHeader.PropHRevNumb = ProposalDetail.PropDRevNumb INNER JOIN
AccountShipTo ON ProposalHeader.PropHShipTo = AccountShipTo.ShipToCust INNER JOIN
SalesRep ON AccountShipTo.ShipToRep = SalesRep.RepNumb INNER JOIN
Product ON ProposalDetail.PropDItem = Product.ProductNumber INNER JOIN
AccountBillTo ON AccountShipTo.ShipToBillTo = AccountBillTo.BillToNum LEFT OUTER JOIN
ProductAddDesc ON ProposalDetail.PropDItem = ProductAddDesc.ProductADNumber AND
ProposalDetail.PropDMfg = ProductAddDesc.ProductADMfgID
WHERE (ProposalHeader.PropHNumb = 'billb1') AND (ProposalHeader.PropHRevNumb = '1')
--END TSQL.........................................................
Any Ideas why this is so slow?
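One thing worth checking first, as a hedged sketch: whether the join columns are indexed, since in SQL Server Compact an unindexed join column forces table scans. The index names below are invented; the key columns are taken from the joins in the query:
CREATE INDEX IX_ProposalDetail_Numb ON ProposalDetail (PropDNumb, PropDRevNumb);
CREATE INDEX IX_ProposalDetail_Item ON ProposalDetail (PropDItem, PropDMfg);
CREATE INDEX IX_AccountShipTo_Cust  ON AccountShipTo (ShipToCust);
CREATE INDEX IX_SalesRep_Numb       ON SalesRep (RepNumb);
CREATE INDEX IX_AccountBillTo_Num   ON AccountBillTo (BillToNum);
CREATE INDEX IX_ProductAddDesc_Key  ON ProductAddDesc (ProductADNumber, ProductADMfgID);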
Thanks,
Bill
View 9 Replies
View Related
Jan 26, 2007
Hi,
I am very new to SQL, and am basically self-taught (apart from SQL for Dummies and a couple of other books). I am hoping someone can help me with the 'CONVERT' statement.
My system outputs dates as numbers like '12345'. What I have written so far is this;
select Resprj.Re_code, Res.re_desc, resprj.Pr_code, projects.Pr_desc,Res.Re_status1,
Projects.active, Projects.Pr_start, Projects.Pr_end
from res inner join Resprj on (Res.Re_code = resprj.Re_code)
inner join projects on (projects.PR_code = resprj.Pr_code)
and Projects.Pr_desc like '%C9%'
where projects.active =-1
order by Projects.Pr_code, Res.Re_desc
Could someone please help with using the 'CONVERT' statement to change the date from '12345' to dd/mm/yy?
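A hedged sketch, since everything depends on what day 0 means in your system: if '12345' is a count of days from a base date, DATEADD plus CONVERT style 3 (dd/mm/yy) does it. The '19000101' below is only a placeholder base date; substitute whatever epoch your system actually uses.
SELECT CONVERT(varchar(8), DATEADD(day, 12345, '19000101'), 3) AS date_dd_mm_yy
-- style 3 = dd/mm/yy; use style 103 for dd/mm/yyyy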
Thanks
Rob
View 6 Replies
View Related
Mar 25, 2008
I have a test db in SQL Server Express 2005 with a 65MB dat file and a 1.1GB log file!! The production db has a slightly larger dat file and only a 6MB log file. I haven't updated my test db from the production db in a couple of months.
I tried Shrink Database and Shrink File in Management Studio Express, but the log file size hasn't changed.
How did the log file get so large and how do I fix it?
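A hedged sketch of the usual fix for a test copy (the logical log file name below is a guess; check it with sp_helpfile first). In FULL recovery the log only empties when it is backed up, which is why Shrink Database alone did nothing:
USE TestDb;  -- placeholder database name
GO
EXEC sp_helpfile;  -- find the log file's logical name
ALTER DATABASE TestDb SET RECOVERY SIMPLE;  -- a test db rarely needs log backups
GO
DBCC SHRINKFILE (TestDb_log, 100);  -- shrink the log to ~100 MB
GO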
Thanks for any help.
View 3 Replies
View Related
Oct 10, 2000
Hello everyone,
I'm not sure if this is a problem, but I've got a database which is about 1,700 MB in size (at least that's the allocated space on disk) and the log file is over 4,600 MB. I've truncated the log file but it still keeps growing. None of our other databases are this large, and there are a lot of transactions performed regularly, but it looks odd to me that the log is this big when the data is half the size. How can I find out exactly how much space is being taken up by the data, and is there anything I can do to shrink the size of the log file? I am not really a DBA, so I'm not sure how crucial this is in the grand scheme of things.
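For the "how much space" question, two standard commands (both work on SQL Server 7.0/2000), run in the database in question:
EXEC sp_spaceused;       -- data, index and unused space inside the database
DBCC SQLPERF(LOGSPACE);  -- log file size and the percentage actually in use
If the log turns out to be mostly free space after a truncate, DBCC SHRINKFILE on the log's logical file name can release it back to the OS; if it refills immediately, something (e.g. replication or a long-running transaction) is holding it open.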
Thanks
View 1 Replies
View Related
Aug 16, 2006
We have SQL Server running on a Windows 2003 server, only because Backup Exec requires it. At the location C:\Program Files\Microsoft SQL Server\MSSQL\Data
there is this file: SuperVISorNet_log.LDF, which is 15 GB and is accessed daily. I apologize because I don't know what this is!
My question is: can this file be 'pruned' (for want of a better word)? Because it's taking up a lot of backup space.
View 17 Replies
View Related
Sep 22, 2006
Hi,
When I open a project in SSIS it shows the message:
"Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file."
I click Yes and my project opens normally. Does someone know why this happens? My project is small: it has one package, which imports Excel files to SQL Server 2005.
Thanks
André Rentes
View 1 Replies
View Related
Jan 27, 2004
Right, I'm no SQL programmer. As I type this, I have roughly a third of the hair I had at 5 o'clock last night. I even lost sleep over it.
I'm trying to return a list of records from a database holding organisation names. As I've built a table to hold record versions, the key fields (with sample data) from a view I created to display this are as follows:
record_id   org_id   live   version
---------   ------   ----   -------
1           1        0      1
2           2        0      1
3           1        1      2
4           2        0      2
As you can see, the record_id will always be unique. Record 3 is a newer version of record 1, and 4 of 2. The issue is this: I only want to return unique organisations. If a version of the organisation record is live on the system (in this case record_id 3), I want to return the live version with its unique record_id. I'm assuming for this I can perform a simple "SELECT ... WHERE live = 1" query.
However, some organisations will have no live versions (see the org with id 2). I still wish to return a record for this organisation, but in this case the most recent version, i.e. version 2 (and again, its unique record_id).
In actual fact, it seems so much clearer when laid out like this. However, I feel it's not going to happen this end, and so any help would be greatly appreciated.
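For what it's worth, a minimal sketch of one way to express "the live version if there is one, else the newest" (the view name org_versions is made up; assumes at most one live version per organisation):
SELECT v.record_id, v.org_id, v.live, v.version
FROM org_versions v                           -- hypothetical view name
JOIN (
    SELECT org_id, MAX(live) AS has_live      -- 1 if any version is live
    FROM org_versions
    GROUP BY org_id
) x ON x.org_id = v.org_id
WHERE (x.has_live = 1 AND v.live = 1)         -- take the live version...
   OR (x.has_live = 0 AND v.version =         -- ...else the newest version
        (SELECT MAX(w.version) FROM org_versions w WHERE w.org_id = v.org_id));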
many thanks in advance,
phil
View 2 Replies
View Related
Jul 20, 2005
I am attaching a database with 3 data files. When I execute "exec sp_attach_db ..." I get this error: database 'POINT' cannot be opened because some of the files could not be activated.
I have deleted its LDF file. Usually I detach my db, then delete the transaction log, and reattach the 3 data files... Now it doesn't work!
Can someone help me? Thanks.
View 1 Replies
View Related
Sep 25, 2006
I have a dtsx package that works fine with one exception. When I open the dtsx package in BI, it gives me the following message:
Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?
When I respond yes, the package opens and I can edit or execute with no problem. Still, I want to understand what could cause this message to occur and, more importantly, how I can get rid of the message. When I try to simply execute the package I still get the same error and it seems this will be a problem for trying to run the package from SQL Server agent.
It seems likely to me that this message refers to the dtsx file (in xml format) itself. Does that make any sense?
View 2 Replies
View Related
Mar 24, 2008
I have a matrix report whose output is entirely numbers. That is fine, but the problem is when I try to export it to Excel: the data exports with the error:
Converting numbers stored as text.
I don't know why the numbers are exporting in text format.
Please let me know whether it is a problem with the SSRS export itself, or whether I need to change any properties in the RDL file.
Note: I get this error when I am using the expression IIF(IsNothing(Fields!Parameter.value),"0",Fields!Parameter.value)
View 5 Replies
View Related
May 1, 2006
In one Data Flow Task (running by itself) I simply have a Raw File Source pushing rows to an OLE DB Command. This command executes an UPDATE (UPDATE table SET field = ?, anotherfield = ? WHERE thisfield = ?) and performs extremely slowly. It's possible to have 62K+ rows needing to be updated, and it typically takes this task around 25-30 minutes to run.
Is there anything I can do to increase performance?
Are there any options other than the OLE DB Command to perform updates?
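One common alternative, as a hedged sketch (table and column names are invented around the ? parameters above): land the raw rows in a staging table via an OLE DB Destination with fast load, then run a single set-based UPDATE in an Execute SQL Task, instead of one UPDATE command per row:
-- One statement for all 62K rows, instead of 62K single-row commands
UPDATE t
SET    t.field        = s.field,
       t.anotherfield = s.anotherfield
FROM   dbo.TargetTable t
INNER JOIN dbo.StagingTable s ON s.thisfield = t.thisfield;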
Thank you.
View 1 Replies
View Related
Mar 21, 2007
Hi all,
I have a query, a rather complex one dealing with more than 1 million rows, that used to run in 40 minutes in Query Analyzer on SQL Server 2000. Now it has been running for 10 hours in Management Studio on SQL Server 2005, and still has not finished! What could be going wrong here? Basically nothing changed, except that I upgraded my server from SQL Server 2000 to SQL Server 2005. Something seems crazily wrong in SQL Server 2005. Any suggestions?
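One hedged suggestion for after an in-place upgrade (the database name below is a placeholder): statistics carried over from 2000 can send the 2005 optimizer badly astray, so refresh them before digging deeper:
USE YourDatabase;     -- placeholder name
GO
EXEC sp_updatestats;  -- refresh out-of-date statistics for the new optimizer
GO
DBCC FREEPROCCACHE;   -- clear cached plans so queries recompile fresh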
Thanks,
Ning
View 3 Replies
View Related
Apr 26, 2007
Hi,
I created a large Integration Services package. Now when I start the project and open the package, I get the message
"Document contains one or more long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?"
What does this message mean for me? What mistake did I make?
Thanks a lot
View 1 Replies
View Related
May 6, 2008
Hi,
I've created a dataset with 27 measures and 20 query parameters. When attempting to load the report containing this dataset I'm shown the message:
'Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file.'
If I do open the file, it does indeed respond very slowly or even hangs.
I can manually format the XML code, but amending the code in any way (i.e. using the layout designer to move a chart) removes my formatting and re-introduces the problem.
Is this an unreasonable number of measures / parameters?
Environment;
VS2005 v8.0.507
MSSQL 2005 9.00.1399.06 Build 3790 SP2
Windows Server 2003 SP2
Many thanks.
View 1 Replies
View Related
Nov 19, 2007
Can anyone please give me the equivalent TSQL for SQL Server 2000 for the following two queries, which work fine in SQL Server 2005?
1
-- Full Table Structure
select t.object_id, t.name as 'tablename', c.name as 'columnname', y.name as 'typename',
    case y.name
        when 'varchar' then convert(varchar, c.max_length)
        when 'decimal' then convert(varchar, c.precision) + ', ' + convert(varchar, c.scale)
        else ''
    end attrib,
    y.*
from sys.tables t, sys.columns c, sys.types y
where t.object_id = c.object_id
    and t.name not in ('sysdiagrams')
    and c.system_type_id = y.system_type_id
    and c.system_type_id = y.user_type_id
order by t.name, c.column_id
2
-- PK and Index
select t.name as 'tablename', i.name as 'indexname', c.name as 'columnname',
    i.is_unique, i.is_primary_key, ic.is_descending_key
from sys.indexes i, sys.tables t, sys.index_columns ic, sys.columns c
where t.object_id = i.object_id
    and t.object_id = ic.object_id
    and t.object_id = c.object_id
    and i.index_id = ic.index_id
    and c.column_id = ic.column_id
    and t.name not in ('sysdiagrams')
order by t.name, i.index_id, ic.index_column_id
This SQL extracts information about the structure of the SQL Server [2005] database.
I need SQL which will return the same result for SQL Server 2000.
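A hedged sketch of the first query for SQL Server 2000, using the old system tables (sysobjects / syscolumns / systypes); treat it as a starting point, since the 2000 metadata doesn't line up column-for-column with the 2005 catalog views:
select o.id as object_id, o.name as 'tablename', c.name as 'columnname', t.name as 'typename',
    case t.name
        when 'varchar' then convert(varchar, c.length)
        -- note: syscolumns.length is bytes, so nvarchar shows twice the character count
        when 'decimal' then convert(varchar, c.prec) + ', ' + convert(varchar, c.scale)
        else ''
    end attrib
from sysobjects o
    join syscolumns c on c.id = o.id
    join systypes t on t.xusertype = c.xusertype
where o.xtype = 'U'
    and o.name not in ('sysdiagrams')
order by o.name, c.colid
For the second query, sysindexes and sysindexkeys stand in for sys.indexes and sys.index_columns, with INDEXPROPERTY() supplying flags like 'IsUnique'.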
View 1 Replies
View Related
Jun 11, 2015
I have been having issues with our SQL Server for a while now. It seems to run out of memory every few days, and when I look at the memory dump, MEMORYCLERK_SQLOPTIMIZER seems to take over memory and eventually causes the server to crash.
Here is the SQL version we are using: Microsoft SQL Server 2012 (SP1) - 11.0.3460.0 (X64) Jul 22 2014 15:22:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200: ) (Hypervisor). It is in a VM on a Windows 2012 server. It has 20 GB of RAM allocated to it and Max Server Memory is set to 16.5 GB.
I have seen MEMORYCLERK_SQLOPTIMIZER grow to about 11 GB at the time of the server crash. Why is that happening? What is causing memoryclerk_sqloptimizer to get so high? I have looked it up and it looks like it has to do with ad hoc requests, but is there something I can do to bring that memory down when it gets so high, so that I can prevent a server crash? Do we just need to add more memory, or is there a memory leak somewhere?
View 2 Replies
View Related
Mar 27, 2008
Hello everyone,
In reports, my customer's requirement is to display columns based on criteria selected in the UI.
The columns which are not selected by him will be hidden.
For that we set an expression on Visibility --> Hide:
Code Snippet
= NOT Parameters!Parameters.Value.ToString().Contains("Name")
Then, coming to the HTML report,
it works fine, but white space appears at the end of the table.
Can't we suppress the white space?
The white space width is exactly the width of the column which is hidden.
Is my layout design wrong?
Or is this a problem with SSRS?
Experts, please let me know!
Give me a solution!
The customer is strictly focused on that requirement.
***Note: the white space is somewhat acceptable, but my reports are very big, around 45 columns. When he selects 10 out of 45, you can imagine how much space appears!***
View 4 Replies
View Related
Mar 6, 2015
I have a problem with a report built in SSRS and deployed with Dashboard Designer to SharePoint. There are a few filters connected to the report; 2 of them are multivalue. Regardless of the data returned, when I select too many items in a filter, the report gets super small. It doesn't matter what you select; the size changes when you select a certain number of items or more. I replaced the report with a single line (filters were still connected) and the result was the same.
[Screenshots: a small number of items selected vs. more items selected]
The size of the report in Dashboard Designer is set to "Percentage of dashboard page"; when I selected autosize, the result was the same.
View 3 Replies
View Related
Jun 23, 2006
Hello Everyone,
I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.
I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.
I took a look at the query execution plans and I found that they were the exact same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.
If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory etc. Or could it be OS/software related issues, like service packs, SQL Server configurations etc.?
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There are 55 GB of free space on the disk.
2. Applied SQL Server SP4 service packs
3. Defragmented the primary hard drive
4. Applied all Windows Server 2003 updates
Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total Physical Memory 2 GB, Available Physical Memory 815 MB
Windows Server 2003 SE /w SP1
Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE /w SP1
I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue.
Any ideas would help me greatly!
Thanks,
Brian T
View 2 Replies
View Related
Aug 9, 2006
Hello friends, I am not sure if this is the right place to post this question, but if not, please suggest where it can be posted.
I have been thinking of writing stored procs in SQL CLR / changing all of my stored procs to SQL CLR.
Is there anything I need to keep in mind about the size of the stored proc (like calculation-intensive) before I do that? I mean, can I even change a relatively small TSQL stored proc, one that simply says SELECT * FROM Customers, to SQL CLR? Or is SQL CLR only useful / does it only make a difference with calculation-intensive and big stored procs?
When I talked to our architects, they said every small stored proc can be written using SQL CLR, and moreover to forget about classic TSQL stored procs and get used to writing SQL CLR whenever writing any database-related stuff.
And also there are so many articles that discuss the advantages of SQL CLR over TSQL, but I would appreciate it if someone could put up a few bulleted points on why you think SQL CLR is more powerful.
Please advise. Thanks in advance, -L
View 2 Replies
View Related
Jan 8, 2006
Hi
I am creating some dynamic SQL by passing various parameters to my stored procedure. One of my parameters is a string that has lots of values separated by commas, to help build an 'IN' statement.
SET @sql = 'SELECT * FROM areas'
SET @p1 = '10,20'
IF @p1 IS NOT NULL
BEGIN
    SET @sql = @sql + ' WHERE (Areas IN (''' + REPLACE(@p1, ',', ''',''') + '''))'
END
The above query runs perfectly well in Query Analyzer; however, when I put it into my ASP.NET application I get the error "Error converting data type varchar to numeric."
So I think I need to do some sort of casting or converting, but I'm not sure how to do it. Or do I need to use an INSTRING?
I did manage to work out a method using the following:
SELECT * FROM Areas WHERE PATINDEX('%,' + CAST(ArType AS VARCHAR) + ',%', ',' + @p1 + ',') > 0
But I can't seem to convert the above line into a coherent dynamic statement. My feeble attempt is below, but I keep getting errors:
SET @sql = @sql + ' WHERE PATINDEX(''%,'' + CAST(ArType AS VARCHAR) + '',%'','','' + @p1 + '','') > 0'
I'm struggling to understand all the '''s. My TSQL is pretty basic; any help would be much appreciated.
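A hedged sketch of one way out of the quoting maze: let sp_executesql carry the list as a real parameter, so the only doubled quotes left are the two literal '%,' and ',%' pieces (table and column names taken from the post):
DECLARE @sql nvarchar(4000), @p1 varchar(100)
SET @p1 = '10,20'
SET @sql = N'SELECT * FROM Areas'
IF @p1 IS NOT NULL
    -- builds: WHERE PATINDEX('%,' + CAST(ArType AS VARCHAR(20)) + ',%', ',' + @list + ',') > 0
    SET @sql = @sql
        + N' WHERE PATINDEX(''%,'' + CAST(ArType AS VARCHAR(20)) + '',%'','
        + N' '','' + @list + '','') > 0'
EXEC sp_executesql @sql, N'@list varchar(100)', @list = @p1
Passing @p1 as a parameter also avoids embedding its value in the string, which sidesteps the varchar-to-numeric conversion error entirely.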
Many thanks in advance
View 1 Replies
View Related
Mar 14, 2001
A simple update: I want to update max_seq with the max(ordr_seq) from another table.
How do I do it?
update h
set max_seq = d.max(ordr_seq)
from h_drug_stage_dup h
join drug_ordr_stage d
on h.patkey = d.patkey and
h.ordr_dtm= d.ordr_dtm and
h.h_drug = d.h_Drug
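A hedged sketch using the same table and column names as above: an aggregate can't sit directly in SET, so compute the MAX in a derived table first, then join to it:
update h
set max_seq = d.max_ordr_seq
from h_drug_stage_dup h
join (
    -- one MAX(ordr_seq) per join key
    select patkey, ordr_dtm, h_drug, max(ordr_seq) as max_ordr_seq
    from drug_ordr_stage
    group by patkey, ordr_dtm, h_drug
) d on h.patkey = d.patkey
   and h.ordr_dtm = d.ordr_dtm
   and h.h_drug = d.h_drug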
View 1 Replies
View Related
Jun 5, 2001
Please see below (in my sub-query I need to say settle_date = post_date + 3 business days).
How would this be done? Please help!
declare @PD datetime, @MY_SD datetime
--SELECT @PD = SELECT POST_DATE FROM TRANSACTION_HISTORY
--select @MY_SD = @PD + 3 --T+3
--select @MY_SD = @MY_SD + case datepart(dw, @MY_SD) when 7 then 2 when 1 then 1 else 0 end*/
SELECT
WIRE_ORDER_NUMBER FROM TRANSACTION_HISTORY
WHERE POST_DATE BETWEEN '02/01/2001' AND '02/28/2001' AND
WIRE_ORDER_NUMBER IN
(
SELECT ORDER_NUMBER
FROM TRANSACTION_ARCHIVE
WHERE TRANSACTION_ARCHIVE.ORDER_NUMBER = TRANSACTION_HISTORY.WIRE_ORDER_NUMBER
AND SETTLE_DATE = DATEADD(day, 3, POST_DATE) + case datepart(dw, POST_DATE) when 7 then 2 when 1 then 1 else 0 end
)
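Note the draft above checks DATEPART on POST_DATE, while the commented-out lines check the already-shifted date. A hedged sketch following the commented-out logic (assumes the default DATEFIRST, where 1 = Sunday and 7 = Saturday, and that "business days" only needs weekend handling, not holidays):
SELECT WIRE_ORDER_NUMBER
FROM TRANSACTION_HISTORY
WHERE POST_DATE BETWEEN '02/01/2001' AND '02/28/2001'
  AND WIRE_ORDER_NUMBER IN
  (
      SELECT ORDER_NUMBER
      FROM TRANSACTION_ARCHIVE
      WHERE TRANSACTION_ARCHIVE.ORDER_NUMBER = TRANSACTION_HISTORY.WIRE_ORDER_NUMBER
        -- T+3: add 3 calendar days, then roll a Saturday (+2) or Sunday (+1) to Monday
        AND SETTLE_DATE = DATEADD(day,
              CASE DATEPART(dw, DATEADD(day, 3, POST_DATE))
                  WHEN 7 THEN 2   -- lands on Saturday -> Monday
                  WHEN 1 THEN 1   -- lands on Sunday   -> Monday
                  ELSE 0
              END,
              DATEADD(day, 3, POST_DATE))
  )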
View 5 Replies
View Related
Oct 1, 2001
Sql Server 7.0
==============
Hi all!
To find the duplicate entries in a particular column,
I used the following TSQL:
select pno, count(pno) from table1 group by pno
having count(pno) > 1
But now I have another case where I have to test for duplicates as a combination of 3 columns.
I.e., for example: I have 3 columns with the following values.
colA   colB   colC
1      2      3      =============== row 1
1      3      5      =============== row 2
1      2      3      =============== row 3
1      4      5      =============== row 4
8      9      0      =============== row 5
I want to pick up all the duplicate rows (combinations of colA, colB, colC);
the duplicate rows here would be row 1 and row 3.
Can somebody give me a clue as to how to achieve this via TSQL?
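A minimal sketch: the same GROUP BY / HAVING pattern extends from one column to the column combination (table name reused from the single-column example above):
select colA, colB, colC, count(*) as dup_count
from table1
group by colA, colB, colC
having count(*) > 1
To pull back the full rows rather than just the duplicated combinations, join this result back to the table on all three columns.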
Any help greatly appreciated.
TIA
Kinnu
View 1 Replies
View Related