Hello everyone,
I'm not sure if this is a problem, but I've got a database that is about 1,700 MB in size (at least that's the allocated space on disk), and the log file is over 4,600 MB. I've truncated the log file, but it still keeps growing. None of our other databases are this large, and there are a lot of transactions performed regularly, but it looks odd to me that the log is this big when the data is half the size. How can I find out exactly how much space is being taken up by the data, and is there anything I can do to shrink the size of the log file? I'm not really a DBA, so I'm not sure how crucial this is in the grand scheme of things.
Thanks
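For reference, two standard commands show exactly where the space is going (MyDatabase is a placeholder; run them in the database in question):

USE MyDatabase;
EXEC sp_spaceused;        -- database size, unallocated space, and data vs. index totals
DBCC SQLPERF(LOGSPACE);   -- each database's log size and the percentage actually in use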
I have a test db in SQL Server Express 2005 with a 65 MB data file and a 1.1 GB log file! The production db has a slightly larger data file and only a 6 MB log file. I haven't updated my test db from the production db in a couple of months. I tried Shrink Database and Shrink File in Management Studio Express, but the log file size hasn't changed. How did the log file get so large, and how do I fix it? Thanks for any help.
We have SQL Server running on a Windows 2003 server, only because Backup Exec requires it. At the location C:\Program Files\Microsoft SQL Server\MSSQL\Data there is this file: SuperVISorNet_log.LDF, which is 15 GB and is accessed daily. I apologize because I don't know what this is!
My question is: can this file be 'pruned' (for want of a better word)? It's taking up a lot of backup space.
I often in my job come across the following scenario:
A client rings up and says they've run out of server space because the SQL 2000 transaction log has consumed all of it, or a very large portion of it.
What is the correct procedure for resolving this ASAP when working with SQL 2000 databases in Full recovery mode? The other guys in my company all do different things, sometimes with good results and sometimes with bad ones.
The procedure I use is the following:
METHOD 1
1. Back up both the database and the transaction log.
2. Right-click the database and select Detach, which as I understand it is a clean detach that ensures uncommitted transactions are committed to the database.
3. Rename the old transaction log to .ldfOLD.
4. Reattach the database, which creates a new transaction log.
METHOD 2
I don't use this method, but I'm pretty sure it's risky; if possible, can someone tell me why:
1. Change the database from Full recovery mode to Simple, shrink the logs, and then change back to Full mode (a quick T-SQL sketch of what this does appears at the end of this post).
Basically, what I am asking is: what is the fastest and most effective way to sort out the above issue while keeping the ability to roll back successfully?
PLEASE don't comment on why the transaction log is so big, as I don't want to look into that now. All I am asking is what the most effective method is to shrink the log down and save space.
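For reference, Method 2 boils down to roughly the following in T-SQL (MyDB and the logical log file name MyDB_Log are placeholders). The usual objection to it is that switching to Simple breaks the log backup chain, so point-in-time recovery is lost until a fresh full backup is taken:

USE MyDB;
ALTER DATABASE MyDB SET RECOVERY SIMPLE;   -- log may now truncate on checkpoint
DBCC SHRINKFILE (MyDB_Log, 100);           -- target size in MB
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak';   -- restart the backup chain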
I'm working with a database that has a relatively small amount of data. The size of the data file for this database is in the 100-110 MB range, which gives an idea of how much data is in it. What is slightly troubling and baffling is that the transaction log for this same database is significantly larger than the data file, at about 2.5 GB. What could be causing this transaction log to be larger than the actual database? Thx
I am new to SQL and might be missing something very easy. I have a situation where the space allocated to the transaction log of a database is extremely large (5 GB), and I cannot manually reduce it. I get "Error 21335: [SQL-DMO] The new DBfile size must be larger than its current size." This is a problem because the growth has taken all available space on the server.
I have a few tables in SQL Server 2005 Express, currently holding roughly 30-40k records each. I have my log file set to restricted growth at 90 MB. While I'm not close to reaching that, I would like my tables to be able to scale up to possibly millions of records. Based on that, I figure the transaction log will probably need a higher threshold (unrestricted growth). For those with experience with tables holding millions of records, what average log file sizes could I expect? Is it a bad idea to just shrink the log file every night during off-peak hours so that, regardless of the number of records, I always start the day with a minimal log file? Do large log files have any effect on SQL performance?
I have a huge table with ~30 GB of data, and a column needs to be updated. To avoid one huge transaction, I set up a loop that updates part of the records in each pass. The query looks like the following:
DECLARE @mo smalldatetime
DECLARE MOs CURSOR FOR SELECT [a month] FROM [a table]

OPEN MOs
FETCH NEXT FROM MOs INTO @mo
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_UpdateColumn @mo
    -- PRINT @mo
    FETCH NEXT FROM MOs INTO @mo
END
CLOSE MOs
DEALLOCATE MOs
sp_UpdateColumn is the procedure that actually does the update; the code is like the following:
CREATE PROCEDURE sp_UpdateColumn @updMoDate smalldatetime
AS
    [do some calculation here, store results in a temp table #Temp1]
    BEGIN TRAN
        UPDATE A
        SET col = B.col
        FROM [big table] A, #Temp1 B
        WHERE [some matching conditions]
    COMMIT
GO
The BEGIN TRAN and COMMIT lines were meant to break up the transaction; however, our database support people still tell me that a huge transaction has generated a GB-sized log file that filled the drive. Unless the transaction wasn't really split, this should not happen. Can someone take a look at the code and tell me whether anything is wrong? Thanks
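One thing worth checking (a guess from the code alone, not a diagnosis): under the Full recovery model the log retains every committed transaction until a log backup runs, so committing per iteration caps the size of each transaction but not the overall log growth. A sketch of backing the log up inside the loop (the database name and path are placeholders):

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_UpdateColumn @mo
    BACKUP LOG MyDatabase TO DISK = 'E:\LogBackups\MyDatabase.trn'  -- lets the log truncate between batches
    FETCH NEXT FROM MOs INTO @mo
END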
I have inherited a SQL 2000 database (I am new to SQL DBA work) and found this while checking the db properties: the transaction log has grown bigger than the actual data file. I thought transaction log backups would truncate the inactive portion of the log and shrink it, but apparently not; they may have been truncating the inactive portion without shrinking the file. This site does not have a job for truncating the data/log files periodically. What is the best method to deal with this situation, and how can I shrink the transaction log quickly?
I have a table that's about 3 GB; using this table and a few others, I'm building another table. The problem is that when building the new table, my transaction log inflates so much that I'm running out of disk space. What can I do to prevent this or to keep the transaction log size under control?
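One pattern that tends to help here (hedged; the table and column names below are placeholders): SELECT ... INTO is minimally logged when the database is in the SIMPLE or BULK_LOGGED recovery model, so building the new table that way can keep the log small:

SELECT a.Id, a.Col1, b.Col2
INTO dbo.NewTable              -- created on the fly; minimally logged under SIMPLE/BULK_LOGGED
FROM dbo.BigTable AS a
JOIN dbo.OtherTable AS b ON b.Id = a.Id;

Alternatively, loading the new table in batches keeps any single transaction, and therefore the active log, small.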
We currently have a database about 300 GB in size. Because our backup system failed for a while, we were left with a transaction log file that grew to about 160 GB. Our backups are working again and everything is fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.
When you delete records, the delete transactions get logged to the transaction log. My understanding is that when a backup is done, these transactions get discarded from the log.
Could I make use of this relatively large transaction log and start deleting records without actually adding to the log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk, and so on until the table is down to only the records we now need. Will this work?
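For what it's worth, the usual way to script this kind of chunked delete (hedged: the table name, cutoff, and batch size are placeholders, and DELETE TOP requires SQL Server 2005 or later) keeps each transaction small so the existing log space is reused between log backups:

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.LoggingTable
    WHERE LogDate < '20070101'   -- placeholder cutoff
    SET @rows = @@ROWCOUNT
END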
I am very new to SQL, and am basically self-taught (apart from SQL for Dummies and a couple of other books). I am hoping someone can help me with the CONVERT function.
My system outputs dates as numbers like '12345'. What I have written so far is this:
SELECT Resprj.Re_code, Res.Re_desc, Resprj.Pr_code, Projects.Pr_desc,
       Res.Re_status1, Projects.Active, Projects.Pr_start, Projects.Pr_end
FROM Res
INNER JOIN Resprj ON (Res.Re_code = Resprj.Re_code)
INNER JOIN Projects ON (Projects.Pr_code = Resprj.Pr_code)
    AND Projects.Pr_desc LIKE '%C9%'
WHERE Projects.Active = -1
ORDER BY Projects.Pr_code, Res.Re_desc
Could someone please help me use the CONVERT function to change the date from '12345' to dd/mm/yy?
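If '12345' is a day count from some epoch, which is an assumption on my part (Clarion-style dates, for instance, count days from 1800-12-28), then the conversion is DATEADD to get a datetime plus CONVERT style 3 for dd/mm/yy:

SELECT CONVERT(varchar(8), DATEADD(day, Projects.Pr_start, '18001228'), 3) AS Pr_start_ddmmyy

The base date would need to match whatever epoch the system actually uses.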
I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 driver (the DSN was configured by a sysadmin on the AS400, so it is configured properly). When I execute the package manually (by right-clicking and choosing "Execute Package"), the package runs fine and returns data in no time (approximately 30,000 rows in 15 seconds).
The problem starts when a job executes the same package! When I start the job, the DTS package runs very, very slowly; sometimes it takes hours to return a few rows, and it seems to be stuck.
The SQL Agent runs as an NT account with administrator rights and has full access to the AS400, so the problem is not the Agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly and then sits in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what the problem could be? I need help quickly! Thank you
I have a huge speed issue on one or two of my SQL Tables. I have included the basic design below.
Structure: Id, ParentId, Name
Group: Id, ParentId, Name, Weight
Products: Id, Name
StructureProducts: StructureId, ProductId, Imported
StructureGroups: StructureId, GroupId
GroupProducts: GroupId, ProductId
AnswerDates: Id, AssessmentDate
Scores (this is the slow table): AnswerDateId, StructureId, GroupId (nullable), ProductId (nullable), Score (>= 0 and <= 100)
OK, structures are the start of everything. Structures have children. If a group/product is linked to a parent or child structure, then that group/product is visible along the structure tree's flow path. Groups, like structures, have children; and also like structures, if a group is given a product, then that product is visible through the structure tree's flow path.
Example:
Earth [Structure]
- Asia [Structure]
--- China [Structure]
--- Japan [Structure]
----- Computer Stuff [Group]
------- Desktops [Group]
------- Servers [Group]
------- Laptops [Group]
--------- HP [Product]
--------- Dell [Product]
--------- Fujitsu [Product]
- Europe [Structure]
--- Germany [Structure]
----- Berlin [Structure]
--- Italy [Structure]
----- Rome [Structure]
----- Venice [Structure]
- America [Structure]
--- United States of America [Structure]
----- New York [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Washington [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Chicago [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
- Africa [Structure]
--- South Africa [Structure]
----- Johannesburg [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Durban [Structure]
----- Cape Town [Structure]
- Australasia [Structure]
So the initial steps that happen (with regard to scoring) are as follows:
1. Insert the root score (which is for a structure, a group, an answer date, and either a product or no product).
2. Score the next group up along the tree, using the scores for the groups at the same level as the original group (0 if no score exists).
3. Continue until the group tree is at its root (ParentId is null).
4. Using the next structure up along the tree, repeat steps 2 and 3.
5. Continue step 4 until the structure is at its root (ParentId is null).
Example: scoring a product for a Johannesburg Acer laptop would go as follows:
1. Initial score for [Acer] product against group [Laptops] for Johannesburg.
2. Calculate score for all products (ProductId = null) against [Laptops] for Johannesburg.
3. Calculate score for [Acer] product against group [Computer Stuff] for Johannesburg.
4. Calculate score for all products against group [Computer Stuff] for Johannesburg.
5. Calculate score for [Acer] product against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
6. Calculate score for all products against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
7. Calculate score for [Acer] product against group [Laptops] for South Africa.
8. Calculate score for all products (ProductId = null) against [Laptops] for South Africa.
9. Calculate score for [Acer] product against group [Computer Stuff] for South Africa.
10. Calculate score for all products against group [Computer Stuff] for South Africa.
11. Calculate score for [Acer] product against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
12. Calculate score for all products against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
13. Calculate score for [Acer] product against group [Laptops] for Africa.
14. Calculate score for all products (ProductId = null) against [Laptops] for Africa.
15. Calculate score for [Acer] product against group [Computer Stuff] for Africa.
16. Calculate score for all products against group [Computer Stuff] for Africa.
17. Calculate score for [Acer] product against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
18. Calculate score for all products against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
etc., etc.
This basically covers the concept behind the basic scoring methodology. Now the methodology splits in two. Methodology 1 says the calculations should use the exact same date as the original score date (i.e., if I score today, only scores from today are used in the calculations). Methodology 2 says the calculations should use the latest available date (i.e., if I score today, scores from today plus the latest score before today are used in the calculations).
To add another problem to this already complex process, each group and each product within a structure can have either of the two scoring methodologies assigned to it. Also, products can only be scored against the structures and groups they are assigned to; e.g., Acer exists in the Laptops group in Johannesburg, South Africa, and Africa, but doesn't exist in New York.
OK, so now that I've briefly explained how this scoring works, let me get to the heart of the problem. Basically it's speed (you can clearly see why), though the speed issue only comes up in one place: where it has to look backwards for the latest available score for the required group, structure, and product.
For this to happen I wrote a function:

ALTER FUNCTION [dbo].[GetLatestAnswerDateId]
(
    @StructureId INT, @GroupId INT, @ProductId INT, @AnswerDateId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @Id INT
    DECLARE @Date DATETIME

    SELECT TOP 1 @Date = [Date]
    FROM [dbo].[AnswerDate]
    WHERE [Id] = ISNULL(@AnswerDateId, [Id])
    ORDER BY [Date] DESC

    SELECT TOP 1 @Id = ad.Id -- gs.[AnswerDateId]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.Id = gs.AnswerDateId
    WHERE [StructureId] = @StructureId
      AND ISNULL([GroupId], -1) = ISNULL(@GroupId, -1)
      AND ISNULL([ProductId], -1) = ISNULL(@ProductId, -1)
      AND [Date] <= @Date
    ORDER BY [Date] DESC

    RETURN @Id
END
On small amounts of data (1,000 rows or so) it's quick, though that is because the data is minimal; on large amounts of data this function runs for a long time, specifically in the context of the following query when there are six months of scoring data (100,000+ rows) to peruse:
SELECT [StructureId], [GroupId], [AnswerDateId], [ProductId], [Score]
FROM [Scoring]
WHERE AnswerDateId = dbo.GetLatestAnswerDateId([StructureId], [GroupId], [ProductId], NULL)
  AND [StructureId] = South Africa
  AND [GroupId] = Computer Stuff
  AND [ProductId] = Acer
Any ideas on how to make this quick? Or at least quicker?
My current runtime for calculating the 2,500 base scores (totalling 100,000± rows) is 15 hours, though this is an initial calculation and is only supposed to be done once. Also, the calculations are all correct, so my only issue is the speed of the entire process.
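A hedged observation, based only on the code shown: a scalar UDF like this runs once per candidate row, and the ISNULL(...) wrappers around [GroupId] and [ProductId] prevent index seeks. Two things that may help are a covering index on the lookup columns, e.g.:

CREATE INDEX IX_Scoring_Lookup
ON dbo.Scoring (StructureId, GroupId, ProductId, AnswerDateId);

(the index name is made up; the column order assumes lookups always filter by structure first, as in the function above), and rewriting the lookup inline, e.g. as a TOP 1 ... ORDER BY inside a CROSS APPLY, so the optimizer can use the index instead of calling the function per row.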
Hi, I have a table defined as:

CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO

It contains 5.6 million rows and has non-clustered indexes on Date, ReasonID, maj_Reason, and Connection. Compared to other tables on the same server, this one is extremely slow. A simple query such as:

SELECT
    SD.reason_desc,
    SD.Duration,
    SD.maj_reason_desc,
    SD.[Connection],
    SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > DATEADD(Month, -2, GETDATE())

takes over 2 minutes to run! I realise the table contains several large columns which make it quite large, but unfortunately this cannot be changed for the moment. How can I assess what is causing the long query time? And what could I possibly do to speed this table up? The database itself is running on a dedicated server which hosts some other databases, none of which have this performance issue. Anyone have any ideas?
"Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file."
I click Yes and my project opens normally. Does anyone know why this happens? My project is small; it has one package that imports Excel files into SQL Server 2005.
Hello! I have a query that joins five tables and returns around 45,000 rows, taking no more than a minute to execute in Management Studio, on a SQL Server 2005 box: 2 CPUs (dual core, 32-bit), 4 GB RAM, and a RAID 5 disk system. The O/S is Windows 2003 SP2 Standard Edition.
When the same query is executed in SSRS 2005, with some drilldown and summaries of drilldown levels, it never finishes executing.
Looking at the hardware in Performance Monitor reveals nothing strange, except that CPU time is around 40 percent. Over 2 GB of memory is available the whole time.
Any suggestions are appreciated!
Any problems with SQL Server 2005 source database running on SQL Server 2000 compatibility level?
Right, I'm no SQL programmer. As I type this, I have roughly a third of the hair I had at 5 o'clock last night. I even lost sleep over it.
I'm trying to return a list of records from a database holding organisation names. As I've built a table to hold record versions, the key fields (with sample data) from a view I created to display this are as follows:
As you can see, the record id will always be unique. Record 3 is a newer version of record 1, and 4 of 2. The issue is this: I only want to return unique organisations. If a version of the organisation record is live on the system (in this case record id 3), I want to return the live version with its unique record id. I'm assuming for this I can perform a simple "SELECT ... WHERE live = 1" query.
However, some organisations will have no live versions (see the org with id 2). I still wish to return a record for this organisation, but in this case the most recent version, i.e. version 2 (and, again, its unique record id).
In actual fact, it seems so much clearer when laid out like this. However, I feel it's not going to happen at this end, so any help would be greatly appreciated.
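One way to express "live version if it exists, otherwise latest version" (a sketch only; the view and column names OrgVersions, org_id, version, and live are assumptions based on the description above):

SELECT v.*
FROM OrgVersions v
JOIN (
    SELECT org_id,
           MAX(CASE WHEN live = 1 THEN version END) AS live_version,  -- NULL when no live version
           MAX(version) AS latest_version
    FROM OrgVersions
    GROUP BY org_id
) x ON x.org_id = v.org_id
   AND v.version = COALESCE(x.live_version, x.latest_version)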
I am attaching a database with 3 data files. When I execute "EXEC sp_attach_db ..." I get this error: database 'POINT' cannot be opened because some of the files could not be activated. I have deleted its LDF file. Usually I detach my db, then delete the transaction log, and reattach the 3 data files... Now it doesn't work! Can someone help me? Thanks.
I have a dtsx package that works fine with one exception. When I open the dtsx package in BI, it gives me the following message:
Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?
When I respond Yes, the package opens and I can edit or execute it with no problem. Still, I want to understand what could cause this message and, more importantly, how I can get rid of it. When I try to simply execute the package I still get the same message, and it seems this will be a problem when running the package from SQL Server Agent.
It seems likely to me that this message refers to the dtsx file itself (which is in XML format). Does that make sense?
I have a package that reads the contents of 11 Excel files into various tables. Opening this package in the designer, or with DTExecUI, is extremely slow. In both cases, when I open the package it takes over 10 minutes to do anything. Visual Studio gives the "Visual Studio is Busy" message for 10 minutes, and DTExecUI just hangs. DTExecUI actually hangs twice: once when opening the package and a second time when clicking "Execute" (totalling over 20 minutes). It seems like no matter how I run the package, it will always hang for 10 minutes before running, with no status message or anything. Once it runs, it completes quickly with no errors.
The various tasks in the package are fairly simple, most being Source > Data Conversion > Destination.
I have a matrix report whose output is entirely numbers. That is fine, but the problem is that when I try to export it to Excel, the data exports with the error: "Converting numbers stored as text." I don't know why the numbers are exporting in text format. Please let me know whether this is a problem with the SSRS export itself, or whether I need to change any properties in the RDL file. Note: I get this error when I use the expression IIF(IsNothing(Fields!Parameter.value),"0",Fields!Parameter.value)
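One likely culprit, judging from the expression alone (so treat this as an assumption): "0" is a string literal, which makes the whole IIF return text, and Excel then flags the cells as numbers stored as text. Returning a numeric zero instead may fix the export:

= IIF(IsNothing(Fields!Parameter.Value), 0, Fields!Parameter.Value)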
In one Data Flow Task (running by itself) I simply have a Raw File Source pushing rows to an OLE DB Command. This command executes an UPDATE statement (UPDATE table SET field = ?, anotherfield = ? WHERE thisfield = ?) and performs extremely slowly. It's possible to have 62K+ rows needing updates, and this task typically takes around 25-30 minutes to run.
Is there anything I can do to increase performance? Are there any options other than the OLE DB Command for performing updates?
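A common alternative worth trying (a sketch; the staging table and column names are placeholders): the OLE DB Command fires once per row, so landing the raw file rows in a staging table via an OLE DB Destination and then running one set-based UPDATE in an Execute SQL Task usually wins by a wide margin:

UPDATE t
SET    t.field = s.field,
       t.anotherfield = s.anotherfield
FROM   dbo.TargetTable AS t
JOIN   dbo.Staging AS s
    ON s.thisfield = t.thisfield;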
I have a rather complex query dealing with more than 1 million rows; it used to run in 40 minutes in Query Analyzer on SQL Server 2000. Now it has been running for 10 hours in Management Studio on SQL Server 2005 and still has not finished! What could be going wrong here? Basically nothing changed, except that I upgraded my server from SQL Server 2000 to SQL Server 2005. It seems something is badly wrong in SQL Server 2005. Any suggestions?
I'm experiencing an extremely slow connection from a Windows XP Pro SP2 client to MSSQL 2000 running on a Windows 2000 server. The client is running a VB6 application that connects with Windows authentication: every form requesting data opens with a long delay on first launch; subsequent attempts are normally fast.
In the same LAN there are some others identical clients, all running fine.
Every other network activity from that client is ok.
I'm working within the VS2005/Business Intelligence Studio environment. I've got one master package, which loads about 18 sub-packages as tasks.
After opening the master package (and waiting 5-10 minutes for the packages to open and validate), maneuvering within the IDE is nearly impossible, it is so slow. Context menus can take 30 seconds to open. Certain operations, like closing a window, seem to hang the environment.
Does anyone have any feedback about this kind of IDE performance problem?
I created a large Integration Services package. Now when I start the project and open the package, I get the message:
"Document contains one or more long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?"
What does this message mean for me? What did I do wrong?
I've created a dataset with 27 measures and 20 query parameters. When attempting to load the report containing this dataset, I'm shown the message:
'Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file.'
If I do open the file, it does indeed respond very slowly or even hangs.
I can manually format the XML, but amending the report in any way (e.g., using the layout designer to move a chart) removes my formatting and re-introduces the problem.
Is this an unreasonable number of measures/parameters?
Environment: VS2005 v8.0.507; MSSQL 2005 9.00.1399.06; Windows Server 2003 SP2 (Build 3790)
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the Set XACT . . ., and also tried with Set Implicit_Transactions off.
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRAN

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1 AND DONE = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
I have been having issues with our SQL Server for a while now. It seems to run out of memory every few days, and when I look at the memory dump, the MEMORYCLERK_SQLOPTIMIZER clerk takes over memory and eventually causes the server to crash.
Here is the SQL version we are using: Microsoft SQL Server 2012 (SP1) - 11.0.3460.0 (X64), Jul 22 2014 15:22:00, Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200) (Hypervisor). It is on a VM on a Windows 2012 server. It has 20 GB of RAM allocated to it, and Max Server Memory is set to 16.5 GB.
I have seen MEMORYCLERK_SQLOPTIMIZER grow to about 11 GB at the time of the server crash. Why is that happening? What is causing memoryclerk_sqloptimizer to get so high? From what I've read, it has to do with ad hoc requests, but is there something I can do to bring that memory down when it gets so high, so that I can prevent a server crash? Do we just need to add more memory, or is there a memory leak somewhere?
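For what it's worth, a starting point for investigating (these are the standard DMV and sp_configure options on SQL Server 2012; whether ad hoc compilation is actually the driver here is only a guess):

-- see which clerks hold the most memory right now
SELECT TOP (10) [type], name, SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type], name
ORDER BY size_mb DESC;

-- if single-use ad hoc plans dominate the cache, this documented option stubs them out
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;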
Hello everyone. In reports, my customer's requirement is to display columns based on criteria selected in the UI; the columns not selected are hidden. For that we put an expression on Visibility --> Hide:
Code Snippet:
= NOT Parameters!Parameters.Value.ToString().Contains("Name")
In the HTML report this works fine, but white space appears at the end of the table. Can't we suppress the white space? The white space width is exactly the width of the hidden column. Is my layout design wrong, or is this a problem with SSRS? Experts, please let me know and give me a solution! The customer is strictly focused on this requirement.
Note: a little white space is somewhat acceptable, but my reports are very big, around 45 columns. When the customer selects 10 out of 45, you can imagine how much white space appears.