I have a test db in SQL Server Express 2005 with a 65 MB data file and a 1.1 GB log file! The production db has a slightly larger data file and only a 6 MB log file. I haven't updated my test db from the production db in a couple of months.
I tried Shrink Database and Shrink File in Management Studio Express, but the log file size hasn't changed.
How did the log file get so large and how do I fix it?
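For reference, a minimal sketch of the usual approach on SQL Server 2005 (database, path and file names here are placeholders, and this assumes the database is in the FULL recovery model):

    -- back up the log so the inactive portion can be reused
    BACKUP LOG TestDb TO DISK = 'C:\Backups\TestDb_log.bak';
    -- look up the logical name of the log file
    USE TestDb;
    EXEC sp_helpfile;
    -- shrink the physical log file down to roughly 10 MB
    DBCC SHRINKFILE (TestDb_log, 10);

A log in FULL recovery that is never backed up keeps growing, which is usually how a test copy ends up with a 1.1 GB log next to a 65 MB data file.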
Hello everyone. I'm not sure if this is a problem, but I've got a database which is about 1700 MB in size (at least that's the allocated space on disk), and the log file is over 4600 MB. I've truncated the log file but it still keeps growing. None of our other databases are this large, and although there are a lot of transactions performed regularly, it looks odd to me that the log is this big when the data is half the size. How can I find out exactly how much space is being taken up by the data, and is there anything I can do to shrink the size of the log file? I am not really a DBA, so I'm not sure how crucial this is in the grand scheme of things. Thanks.
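To see how the space is actually being used, the following commands (available on SQL Server 2000 and later) are a common starting point:

    EXEC sp_spaceused;        -- data, index and unallocated space for the current database
    DBCC SQLPERF(LOGSPACE);   -- log file size and percentage in use for every database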
We have SQL Server running on a Windows 2003 server, only because Backup Exec requires it. At the location C:\Program Files\Microsoft SQL Server\MSSQL\Data there is this file: SuperVISorNet_log.LDF, which is 15 GB and is accessed daily. I apologize because I don't know what this is!
My question is: can this file be 'pruned' (for want of a better word), because it's taking up a lot of backup space?
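A quick way to find out which database that LDF file belongs to (this works on SQL 2000/2005) is to query master..sysaltfiles:

    SELECT DB_NAME(dbid)   AS database_name,
           name            AS logical_name,
           filename,
           size * 8 / 1024 AS size_mb
    FROM master..sysaltfiles
    WHERE filename LIKE '%SuperVISorNet_log.LDF';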
I administer an application that runs on SQL 2000, without being a SQL expert myself. I have observed that if I create a maintenance plan then the log file gets trimmed, but otherwise the log file keeps growing. All was going well at one site, but recently the log file has become huge and the server is running out of disk space. Is that the problem? I'm not sure how to control this. Can I delete the log file, and will it then be re-created? Two smaller databases share the same maintenance plan and their logs are small. Thank you.
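For context, it is the maintenance plan's transaction log backup that keeps the log trimmed. A hedged sketch for SQL 2000 of checking, and only if point-in-time recovery is not required, changing the recovery model ('YourDb' is a placeholder name):

    SELECT DATABASEPROPERTYEX('YourDb', 'Recovery');  -- FULL, BULK_LOGGED or SIMPLE
    -- only if point-in-time restores are not required:
    ALTER DATABASE YourDb SET RECOVERY SIMPLE;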
I have a database which has a small log file and a huge mdf file of around 3 GB. I tried to shrink it through the Enterprise Manager options but had no luck. Even through Taskpad I find the table sizes are less than 1 GB. Kindly help me solve this problem.
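As a rough sketch (the logical file name below is a placeholder; the real one comes from sp_helpfile): a data file can only shrink if there is free space inside it, which sp_spaceused will show.

    EXEC sp_spaceused;                   -- reserved vs. unallocated space
    DBCC SHRINKFILE (YourDb_Data, 1500); -- target size in MB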
I am getting the following error when running my package.
An OLE DB record is available. Source: "Microsoft OLE DB Provider for Visual FoxPro" Hresult: 0x80004005 Description: "File c:\docume~1\sortiz\locals~1\temp\0003589001o.tmp is too large.".
I use a SELECT statement to join two large FoxPro tables. The data contains over a million records.
My database's log file grows very large within a few days. There are only 2 users and they do not do a lot of work, but it gets larger and larger every day.
Autogrow is enabled, file growth is set to 'In Percent', and maximum file size is set to 'Restricted File Growth - 500 MB'.
I have a problem where I want to import a 1.6 GB XML file into a SQL Server database with SSIS. My hunch is that SSIS is not very good at handling such a large amount of XML data; my test shows that SSIS tries to read the whole file into memory.
Does anyone know of any solution to this memory problem? What I want is to take this source XML file, import it into a database, make some transformations on it (eliminate duplicates, etc.), and then produce a NEW XML file as output in a different XSD format.
Is SSIS really the right tool for this operation?
The source XML file also has mixed content in complex types, which seems to be a problem for SSIS as well.
I can't find the answer to these problems. I'm using SQL Server 2005 Developer Edition. I have a report with 15,036 rows and about 15 columns. It utilizes a table and a report header and footer. One column has data containing hyperlinks to another report.
1) When I export this report to Excel, the result is a 14.2 MB file. Why is this file so huge, and how can I make it smaller?
2) Is there anything I can do to prevent the inclusion of the hyperlinks in the exported Excel file?
When we do a full backup the .bak file is 700 MB, but meanwhile our differential backup has grown to a size of 20 GB. The backup set expires in 1 day, and if the backup file already exists we "append" to it.
What we basically want is a differential backup for one day only. I realize that in SQL Server 2005 you can add a "Clean up" task, but that deletes files, and because we only have one .bak file for the differential backup this is not an option.
If we set the backup file to be overwritten if it already exists, what does this mean? Assume that we run a differential backup every hour; does this mean that the differential backup file will be overwritten every hour?
How can we make sure that we have a differential backup every hour and keep only the differential backups for the last 24 hours?
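One hedged approach: write each hourly differential to its own dated file with INIT so the file never grows by appending, and let a separate cleanup step remove files older than 24 hours. A minimal sketch (paths and names are placeholders):

    BACKUP DATABASE YourDb
    TO DISK = 'D:\Backups\YourDb_diff_20230101_1300.bak'
    WITH DIFFERENTIAL, INIT;  -- INIT overwrites the backup file instead of appending

With INIT and a single fixed file name, the file is indeed overwritten on every run, so it would only ever contain the latest differential; using per-hour file names plus cleanup is what gives a rolling 24-hour window.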
When using DTS (in SQL 7) to export a large varchar to a text file via OLE DB, it clips it at 255 characters. No other data access drivers seem to work, either. This is lame! I cannot use bcp as a workaround, because I want quoted comma-delimited output, which it doesn't support, and I am using a query-based export where the query calls a stored proc, which bcp also doesn't support.
Are there any new versions of MDAC that fix this? Anyone know a workaround? My current hack fix is to split my field into two, but this is a grubby fix that hassles my recipients.
This is a pretty fundamental limitation to a major product!
I have inherited a SQL 2000 database (I am new to SQL DBA work) and I found this when I was checking the db properties: the transaction log has grown bigger than the actual data file. I thought transaction log backups would truncate the inactive portion of the log and shrink the transaction log, but that does not seem to be the case; maybe they were truncating the inactive portion of the log, but not shrinking the file. This site does not have a job for truncating the data/log files periodically. What is the best method to deal with this situation, and how can I shrink the transaction log quickly?
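For reference, a log backup only marks space inside the log as reusable; the physical file never shrinks on its own. Once a log backup has run, a one-off shrink is requested explicitly (the logical name below is a placeholder from sp_helpfile):

    DBCC SHRINKFILE (YourDb_Log, 500);  -- target size in MB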
We have recently added a new filegroup and file on a new drive. We have tested it by moving a small table to the new filegroup. We would now like to relocate another table to this filegroup, but the table is (we estimate) about 75 GB. My question is this: how long can we expect the transfer of data from the current filegroup to the new one to take for this table? I understand that the answer may vary depending on our hardware, but does anyone have a rough estimate?
The current (primary) file is located on a DELL SAN and the new secondary group is on an EMC 4700; both are connected via Fibre Channel.
Also, a bonus question: does a "normal" database backup created by a maintenance plan back up the secondary data file into the BAK file as well?
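On the bonus question: a full database backup includes every data file in every filegroup, so the secondary file ends up in the BAK. For the move itself, one common pattern on SQL 2005 and later, assuming the table has a clustered index and its definition (keys, uniqueness) stays the same, is to rebuild that clustered index on the new filegroup, which carries the data with it (object names below are placeholders):

    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [NewFileGroup];

How long it takes depends mostly on I/O throughput between the two arrays; timing the move of a mid-sized table first gives a reasonable per-GB estimate to scale up.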
Hi everybody. On executing the RESTORE command of SQL Server to restore from a backup of 78.3 MB, the "Server Application Unavailable" error message comes up. The error message in the Application log is as follows: aspnet_wp.exe (PID: 2184) was recycled because memory consumption exceeded 152 MB (60 percent of available RAM). However, using Query Analyzer of SQL Server I am able to restore the database. What is the solution to this problem?
Hi. I want to store an RMVB file in SQL Server 2000 and read it back, so I can play the RMVB file on the web. The size of the RMVB file is more than 300 MB but less than 1 GB, so the image field type can hold it. Now my question is: how can I store and read the RMVB file from SQL Server 2000? I used SqlInsertCommand.ExecuteNonQuery() in my program, but it is too slow and sometimes raises an unknown error. Thank you for your help.
Hi, I want to store large files like PDF files, HTML pages, and audio files in a SQL Server database. How can I do it? If somebody knows, please tell me as soon as possible. Thanks in advance. Bye.
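One hedged sketch, assuming SQL Server 2005 or later (the table, column and file names are invented for illustration): store each file in a VARBINARY(MAX) column and load it with OPENROWSET ... SINGLE_BLOB.

    CREATE TABLE dbo.Documents (
        Id       INT IDENTITY(1,1) PRIMARY KEY,
        FileName NVARCHAR(260)  NOT NULL,
        Content  VARBINARY(MAX) NOT NULL
    );

    INSERT INTO dbo.Documents (FileName, Content)
    SELECT 'manual.pdf', BulkColumn
    FROM OPENROWSET(BULK 'C:\Files\manual.pdf', SINGLE_BLOB) AS b;

On SQL Server 2000 the column type would be image instead, and the bytes would normally be written from client code.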
We have a 5 TB database in our environment. Both the MDF and LDF are located on a single drive, which is 10 TB.
Now we want to move to a new server, but we have multiple drives of at most 1 TB each. How can I go about splitting the data from one MDF file into multiple data files? And how about moving indexes?
SQL Version : Microsoft SQL Server 2012 (SP1) - 11.0.3513.0 (X64) - Enterprise Edition (64-bit)
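A hedged sketch of one way to spread an existing MDF across several drives (names, sizes and paths below are placeholders): add new files to the filegroup on the new drives, then push pages out of the original file. Indexes live in a filegroup and move with whatever rebuild you do; to place specific indexes on specific drives you would rebuild them onto a dedicated filegroup.

    ALTER DATABASE BigDb
    ADD FILE (NAME = BigDb_Data2, FILENAME = 'E:\Data\BigDb_Data2.ndf', SIZE = 800GB)
    TO FILEGROUP [PRIMARY];

    -- redistribute pages from the original file into the other files of the filegroup
    DBCC SHRINKFILE (BigDb_Data, EMPTYFILE);

Note that EMPTYFILE cannot completely empty the primary data file (system pages stay there), but it moves the bulk of the data; rebuilding the largest tables onto a new filegroup is the usual alternative.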
I am very new to SQL, and am basically self-taught (apart from SQL for Dummies and a couple of other books). I am hoping someone can help me with the 'CONVERT' statement.
My system outputs the date in a format like '12345'. What I have written so far is this:
SELECT Resprj.Re_code, Res.Re_desc, Resprj.Pr_code, Projects.Pr_desc,
       Res.Re_status1, Projects.Active, Projects.Pr_start, Projects.Pr_end
FROM Res
INNER JOIN Resprj ON (Res.Re_code = Resprj.Re_code)
INNER JOIN Projects ON (Projects.Pr_code = Resprj.Pr_code)
    AND Projects.Pr_desc LIKE '%C9%'
WHERE Projects.Active = -1
ORDER BY Projects.Pr_code, Res.Re_desc
Could someone please help me use the 'CONVERT' statement to change the date from '12345' to dd/mm/yy format?
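A value like '12345' is usually a day count from an application-specific base date (Clarion-style dates, for instance, count days from 28 December 1800), so the exact base has to be confirmed for your system. Assuming it is such a day offset and using Pr_start as the example column, a hedged sketch would be:

    -- replace '18001228' with whatever base date your application actually uses
    SELECT CONVERT(varchar(8), DATEADD(day, Projects.Pr_start, '18001228'), 3) AS pr_start_ddmmyy
    FROM Projects;

Style 3 in CONVERT produces dd/mm/yy.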
I have a large fact table of about 500 million rows, and I am using SQL Server 2008 R2, so I am getting the file system error. I have browsed online and tried all the fixes, but I am still getting the error. I tried taking only about a year of data (which was still around 200 million records) and I was still getting the error.
I have several databases that have grown to 300 GB, and I would like to distribute the data into multiple files across multiple drives. Can I create a new database that is spread across the new drives and use a full backup to restore into it, or am I stuck with unloading the data table by table?
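For what it's worth, a restore recreates exactly the files that existed when the backup was taken, so it can relocate them but not split one file into several. A hedged sketch of relocating files during a restore (logical and physical names are placeholders):

    RESTORE DATABASE BigDb
    FROM DISK = 'D:\Backups\BigDb_full.bak'
    WITH MOVE 'BigDb_Data' TO 'E:\Data\BigDb.mdf',
         MOVE 'BigDb_Log'  TO 'F:\Log\BigDb.ldf';

To actually spread the data, the usual route is to add files or filegroups to the restored database and rebuild the large tables and indexes onto them, rather than reloading table by table.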
I have a well-structured but very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If the data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function that reads the portions of the data I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.
A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX) and read it using SUBSTRING with the appropriate offset and length. My questions are about the write/read performance of binary data of this size in SQL Server, and whether there is a third option I haven't thought of. I'm using SQL Server 2014.
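A minimal sketch of the VARBINARY(MAX) route (table and column names are invented here): SUBSTRING on a varbinary column reads just the requested byte range, and the offset is 1-based.

    CREATE TABLE dbo.BlobStore (
        Id         INT IDENTITY(1,1) PRIMARY KEY,
        CapturedAt DATETIME2 NOT NULL,
        Payload    VARBINARY(MAX) NOT NULL
    );

    -- read 4096 bytes starting at byte offset 1,000,000 of one capture
    SELECT SUBSTRING(Payload, 1000001, 4096) AS chunk
    FROM dbo.BlobStore
    WHERE Id = 42;

A possible third option on SQL Server 2014 is FILESTREAM, which keeps the bytes in the file system but under the database's control, so backups stay self-contained.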
Currently we have a database about 300 GB in size. Because our backup system failed some time ago, we were left with a transaction log file which grew to about 160 GB. Our backups are working again now and everything is fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.
When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a backup is done, these transactions get discarded from the transaction log.
Could I make use of this relatively large transaction log file and start deleting records without actually adding to the transaction log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk of records from the table, until we have got the table down to the records that we now need. Will this work?
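Deleting in modest batches between log backups is the usual way to do this without the log file growing. A hedged sketch (SQL 2005+ syntax; table, column, date and batch size are placeholders):

    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) FROM dbo.LoggingTable
        WHERE LoggedAt < '20140101';

        IF @@ROWCOUNT = 0 BREAK;

        -- pause so scheduled log backups can run and the freed log space gets reused
        WAITFOR DELAY '00:00:05';
    END

The deletes are still fully logged; what keeps the file from growing is that each batch is small and the space is recycled once the log has been backed up.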
I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 connection (the DSN was configured by a sysadmin on the AS400, so it is configured properly). When I execute the package manually (by right-clicking and choosing "Execute Package") the package runs fine and returns data in no time (approximately 30,000 rows in 15 seconds).
The problem starts when a job executes the same package! When I start the job, the DTS package runs very, very slowly! Sometimes it takes hours to return a few rows, and it seems that it is stuck.
The SQL Agent is running as an NT account with administrator rights, and it has full access to the AS400, so the problem is not the Agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly, and then it sits in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what could be the problem? I need help quickly! Thank you.
I have a huge speed issue on one or two of my SQL Tables. I have included the basic design below.
Structure: Id, ParentId, Name
Group: Id, ParentId, Name, Weight
Products: Id, Name
StructureProducts: StructureId, ProductId, Imported
StructureGroups: StructureId, GroupId
GroupProducts: GroupId, ProductId
AnswerDates: Id, AssessmentDate
Scores (this is the slow table): AnswerDateId, StructureId, GroupId (nullable), ProductId (nullable), Score (>= 0 and <= 100)
OK, structures are the start of everything. Structures have children. If a group/product is linked to a parent or child structure, then that group/product is visible along the structure tree flow path. Groups, like structures, have children. And also like structures, if a group is given a product, then that product is visible through the structure tree flow path.
Example:

Earth [Structure]
- Asia [Structure]
--- China [Structure]
--- Japan [Structure]
----- Computer Stuff [Group]
------- Desktops [Group]
------- Servers [Group]
------- Laptops [Group]
--------- HP [Product]
--------- Dell [Product]
--------- Fujitsu [Product]
- Europe [Structure]
--- Germany [Structure]
----- Berlin [Structure]
--- Italy [Structure]
----- Rome [Structure]
----- Venice [Structure]
- America [Structure]
--- United States of America [Structure]
----- New York [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Washington [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Chicago [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
- Africa [Structure]
--- South Africa [Structure]
----- Johannesburg [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Durban [Structure]
----- Cape Town [Structure]
- Australasia [Structure]
So the initial steps that happen (with regards to scoring) are as follows:
1. Insert the root score (which would be for a structure, a group, an answer date, and either a product or no product).
2. Score the next group up along the tree view, using the scores for the groups at the same level as the original group (a score of 0 if no score exists).
3. Continue this until the group tree is at its root (ParentId == null).
4. Using the next structure up along the tree view, repeat steps 2 and 3.
5. Continue step 4 until the structure is at its root (ParentId == null).
Example: scoring a product for a Johannesburg Acer laptop would go as follows.
1. Initial score for [Acer] product against group [Laptop] for Johannesburg.
2. Calculate score for all products (ProductId = null) against [Laptop] for Johannesburg.
3. Calculate score for [Acer] product against group [Computer Stuff] for Johannesburg.
4. Calculate score for all products against group [Computer Stuff] for Johannesburg.
5. Calculate score for [Acer] product against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
6. Calculate score for all products against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
7. Calculate score for [Acer] product against group [Laptop] for South Africa.
8. Calculate score for all products (ProductId = null) against [Laptop] for South Africa.
9. Calculate score for [Acer] product against group [Computer Stuff] for South Africa.
10. Calculate score for all products against group [Computer Stuff] for South Africa.
11. Calculate score for [Acer] product against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
12. Calculate score for all products against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
13. Calculate score for [Acer] product against group [Laptop] for Africa.
14. Calculate score for all products (ProductId = null) against [Laptop] for Africa.
15. Calculate score for [Acer] product against group [Computer Stuff] for Africa.
16. Calculate score for all products against group [Computer Stuff] for Africa.
17. Calculate score for [Acer] product against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
18. Calculate score for all products against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
etc., etc.
This basically covers the concept behind the basic scoring methodology. Now the methodology splits in two. The first, Methodology 1, says it should do these calculations using exactly the same date as the original scored date (i.e. if I do a score today, only scores from today will be used in the calculations). The other, Methodology 2, says that it should do the calculations on the latest available date (i.e. if I do a score today, only scores from today and the latest before today will be used in the calculations).
Now, to add another problem to this already complex process, each group and each product within a structure can have either of the two scoring methodologies assigned to it. Also, products can only be scored against the structures and groups that they are assigned to; i.e. Acer exists in the Laptop group in Johannesburg, South Africa and Africa, but doesn't exist in New York.
OK, so now that I've briefly explained how this scoring works, let me get to the heart of the problem. Basically it's speed (you can clearly see why), though the speed issue only comes up in one place: where it has to look backwards for the latest available score for the required group, structure and product.
For this to happen I wrote a function:

ALTER FUNCTION [dbo].[GetLatestAnswerDateId]
(
    @StructureId INT,
    @GroupId INT,
    @ProductId INT,
    @AnswerDateId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @Id INT
    DECLARE @Date DATETIME

    SELECT TOP 1 @Date = [Date]
    FROM [dbo].[AnswerDate]
    WHERE [Id] = ISNULL(@AnswerDateId, [Id])
    ORDER BY [Date] DESC

    SELECT TOP 1 @Id = ad.Id --gs.[AnswerDateId]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.Id = gs.AnswerDateId
    WHERE [StructureId] = @StructureId
      AND ISNULL([GroupId], -1) = ISNULL(@GroupId, -1)
      AND ISNULL([ProductId], -1) = ISNULL(@ProductId, -1)
      AND [Date] <= @Date
    ORDER BY [Date] DESC

    RETURN @Id
END
Now, on small amounts of data (1,000 rows or so) it's quick, though that is only because the data is minimal; on large amounts of data this function runs for a long time, specifically in the context of the following query when there are six months of scoring data (100,000+ rows) to peruse.
SELECT [StructureId], [GroupId], [AnswerDateId], [ProductId], [Score]
FROM [Scoring]
WHERE AnswerDateId = dbo.GetLatestAnswerDateId([StructureId], [GroupId], [ProductId], NULL)
  AND [StructureId] = South Africa
  AND [GroupId] = Computer Stuff
  AND [ProductId] = Acer
Any ideas on how to make this quick, or at least quicker?
My current runtime for calculating the 2,500 base scores (totalling roughly 100,000 rows) is 15 hours. This is an initial calculation and is only supposed to be done once. Also, the calculations are all correct, so my only issue is the speed of the entire process.
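One thing worth checking, as a hedged suggestion: whether Scoring and AnswerDate have indexes that match the function's lookup, since the function runs once per row it is applied to. Something along these lines (index names are invented):

    CREATE NONCLUSTERED INDEX IX_Scoring_Structure_Group_Product
    ON dbo.Scoring (StructureId, GroupId, ProductId, AnswerDateId);

    CREATE NONCLUSTERED INDEX IX_AnswerDate_Date
    ON dbo.AnswerDate ([Date]);

Without an index leading on StructureId, every call to the function scans the whole Scoring table, which compounds quickly at 100,000+ rows.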
Hi, I have a table defined as:

CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO

It contains 5.6 million rows and has non-clustered indexes on Date, ReasonID, maj_reason_id and Connection. Compared to other tables on the same server this one is extremely slow. A simple query such as:

SELECT
    SD.reason_desc,
    SD.Duration,
    SD.maj_reason_desc,
    SD.[Connection],
    SD.EnteredBy
FROM dbo.[SH_Data] SD
WHERE SD.[Date] > DATEADD(Month, -2, GETDATE())

takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment. How can I assess what is causing the long query time, and what could I possibly do to speed this table up? The database itself is running on a dedicated server which hosts some other databases, none of which have this performance issue. Anyone have any ideas?
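A hedged first step, assuming SQL Server 2005 or later: the query filters on Date but then fetches several wide columns per row, so a covering index avoids the lookup back into the 5.6-million-row clustered index for every match. The index name below is illustrative only.

    CREATE NONCLUSTERED INDEX IX_SH_Data_Date_Covering
    ON dbo.SH_Data ([Date])
    INCLUDE (reason_desc, Duration, maj_reason_desc, [Connection], EnteredBy);

On SQL Server 2000, which has no INCLUDE clause, comparing the actual execution plan against the expected index seek on Date is still the place to start.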
"Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file."
I click Yes and my project opens normally. Does anyone know why this happens? My project is small and has one package that imports Excel files into SQL Server 2005.
Hello! I have a query that joins five tables and returns around 45,000 rows, and it takes no more than a minute to execute in Management Studio on SQL Server 2005 with 2 CPUs (32-bit, dual core), 4 GB of RAM and a RAID 5 disk system. The OS is Windows 2003 SP2 Standard Edition.
When the same query is executed in SSRS 2005, with some drilldown and summaries of drilldown levels, it never finishes executing.
Looking at the hardware in Performance Monitor reveals nothing strange except that CPU time is around 40 percent. More than 2 GB of memory is available all the time.
Any suggestions are appreciated!
Are there any problems with a SQL Server 2005 source database running at the SQL Server 2000 compatibility level?
Right, I'm no SQL programmer. As I type this, I have roughly a third of the hair I had at 5 o'clock last night. I even lost sleep over it.
I'm trying to return a list of records from a database holding organisation names. As I've built a table to hold record versions, the key fields (with sample data) from a view I created to display this are as follows:
As you can see, the record id will always be unique. Record 3 is a newer version of record 1, and record 4 of record 2. The issue is this: I only want to return unique organisations. If a version of the organisation record is live on the system (in this case record id 3), I want to return the live version with its unique record id. I'm assuming for this I can perform a simple "SELECT ... WHERE live = 1" query.
However, some organisations will have no live versions (see the org with id 2). I still wish to return a record for this organisation, but in this case the most recent version, i.e. version 2 (and again, its unique record id).
In actual fact, it seems so much clearer when laid out like this. However, I feel it's not going to happen at this end, so any help would be greatly appreciated.
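In case it helps, a hedged sketch of one way to express "live version if there is one, otherwise the newest version" on SQL Server 2005 or later. The view and column names (record_id, org_id, version, live) are guesses based on the description above.

    SELECT record_id, org_id, version, live
    FROM (
        SELECT v.*,
               ROW_NUMBER() OVER (PARTITION BY org_id
                                  ORDER BY live DESC, version DESC) AS rn
        FROM dbo.OrganisationVersions AS v
    ) AS ranked
    WHERE rn = 1;

The ORDER BY inside ROW_NUMBER puts a live row first when one exists and otherwise falls back to the highest version, so each organisation returns exactly one record id.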