We have SQL Server running on a Windows 2003 server, only because Backup Exec requires it. At the location C:\Program Files\Microsoft SQL Server\MSSQL\Data
there is a file called SuperVISorNet_log.LDF which is 15 GB and is accessed daily. I apologize, but I don't know what this is!
My question is: can this file be 'pruned' (for want of a better word)? It's taking up a lot of backup space.
I have a test db in SQL Server Express 2005 with a 65 MB data file and a 1.1 GB log file!! The production db has a slightly larger data file and only a 6 MB log file. I haven't updated my test db from the production db in a couple of months. I tried Shrink Database and Shrink File in Management Studio Express, but the log file size hasn't changed. How did the log file get so large, and how do I fix it? Thanks for any help.
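For what it's worth, the usual cause is a database in the FULL recovery model whose log is never backed up, so the log just keeps growing. A minimal sketch of the common fix: back up the log so its inactive portion can be released, then shrink the log file itself rather than the whole database. The database name, logical file name and backup path below are placeholders (sp_helpfile lists the real logical names).

-- back up the log so its inactive part can be reused (skip this if the db is in SIMPLE recovery)
BACKUP LOG MyTestDb TO DISK = 'C:\Backups\MyTestDb_log.bak';

-- then shrink the physical log file to roughly 100 MB
USE MyTestDb;
DBCC SHRINKFILE (MyTestDb_log, 100);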
Hello everyone. I'm not sure if this is a problem, but I've got a database which is about 1,700 MB in size (at least that's the allocated space on disk) and the log file is over 4,600 MB. I've truncated the log file but it still keeps growing. None of our other databases are this large, and there are a lot of transactions performed regularly, but it looks odd to me that the log is this big when the data is half the size. How can I find out exactly how much space is being taken up by the data, and is there anything I can do to shrink the size of the log file? I am not really a DBA, so I'm not sure how crucial this is in the grand scheme of things. Thanks
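To see exactly how the space breaks down, something along these lines can help (run in the context of the database in question):

-- data vs. index vs. unused space for the current database
EXEC sp_spaceused;

-- log size and percentage in use for every database on the server
DBCC SQLPERF (LOGSPACE);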
I have inherited some databases with extremely large log files. I tried to truncate the transaction log, but it did not work. Can somebody please tell me how to truncate these log files?
I have some large flat files that I need to load into my SQL Server database. The file sizes range from 16 MB to 116 MB. I've tried saving the files to an Excel spreadsheet and then importing them in that format, but that didn't work. Does anyone have any suggestions?
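For files that size, loading them directly with BULK INSERT (or the bcp utility, or a DTS/SSIS flat file source) usually avoids the Excel detour entirely. A minimal sketch, in which the table name, file path and delimiters are assumptions that would need to match the real file:

-- load a delimited text file straight into an existing staging table
BULK INSERT dbo.StagingTable
FROM 'C:\Imports\bigfile.txt'
WITH (
    FIELDTERMINATOR = ',',   -- change to match the file's delimiter
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2      -- skip a header row, if there is one
);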
I have a few tables in SQL Server 2005 Express; currently they hold roughly 30-40k records each. I have my log file set to restricted growth at 90 MB. While I'm not close to reaching that, I would like my tables to be able to scale up to possibly millions of records. Based on that, I figure the transaction log file will probably need a higher threshold (unrestricted growth). For those with experience: for tables that have millions of records, what average log file size could I expect? Is it a bad idea to just shrink the log file every night during off-peak hours, so that regardless of the number of records I'll always start the day with a minimal log file? Do large log files have any effect on SQL performance?
I am trying to run a query that deletes duplicate records from a table with 24 million rows. The problem is that each time I run it the log file fills up and I get an error saying the log file is full, so the query never finishes.
Is there any way to turn off logging when running a query?
I think it also has to do with the disk drive running out of space, as the log file grows to over 12 GB.
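Logging itself can't be switched off for a DELETE, but deleting in batches keeps each transaction (and therefore the log) small. A rough sketch, with hypothetical table, key and duplicate-defining column names:

-- delete duplicates in batches of 10,000 rows (SQL 2000 style; on 2005 use DELETE TOP (10000) instead)
SET ROWCOUNT 10000;
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE Id NOT IN (SELECT MIN(Id)
                     FROM dbo.BigTable
                     GROUP BY DupCol1, DupCol2);
    IF @@ROWCOUNT = 0 BREAK;
    -- back up (or truncate) the log between batches so the space can be reused
END
SET ROWCOUNT 0;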
Hello, I have decided to use Linq for my current ASP.NET project and so far it has been good, but now I am implementing a system that will allow users to upload binary content such as pictures and videos. For ease of management and security, I have decided to store this content directly in the database. The performance hit is a minor concern because very few user-uploaded images/videos will be seen on any given page (usually just one). From the limited tutorials I have seen on the internet, Linq supports the SQL Server varbinary column through its System.Linq.Binary class. This class does not appear to support STREAMS and instead opts to load all of the contents into memory. This content can then be converted to an array of bytes, which can then be output to the browser via the response stream. This is not good. What if I am sending a video that is very large? Varbinary supports up to 2 GB. I can't have a 2 GB video sitting in memory. It makes a lot more sense to stream it via a small buffer. Obviously, I am going to limit the size of the content that users can upload, but the core problem remains. If I limit content size to 2 MB and I have 2 GB of memory on the server, then I can only serve 1000 users concurrently. In reality, that number would be much less because of other processes running on the server. Is there no way to stream data from a varbinary column with Linq using a small buffer of bytes? Do I need to implement some custom logic on my Linq classes? Since these classes are automatically generated, how would I do such a thing? Thanks.
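As an aside on the chunked-read idea: at the T-SQL level a varbinary(max) column can be read a piece at a time with SUBSTRING, which is essentially what a streaming reader does under the hood. A hedged sketch only, with a hypothetical table and column name:

-- read a stored file in 64 KB chunks instead of pulling the whole value at once
DECLARE @offset INT, @chunk INT, @total INT;
SET @offset = 1;
SET @chunk  = 65536;

SELECT @total = DATALENGTH(FileData) FROM dbo.UserUploads WHERE UploadId = 1;

WHILE @offset <= @total
BEGIN
    SELECT SUBSTRING(FileData, @offset, @chunk)   -- one 64 KB slice of the value
    FROM dbo.UserUploads
    WHERE UploadId = 1;
    SET @offset = @offset + @chunk;
END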
How do I insert a large chunk of text into a table column? My project is to build a news website where people can go and read news articles. The articles are provided by the author in Word format, so how do I insert a news article into the table's column? Any help would be appreciated.
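One common pattern is to extract the article text from the Word document in the application (for example, save it as plain text or HTML) and store it in an nvarchar(max) column, passing the text in as a parameter rather than concatenating it into the SQL string. A minimal sketch; the table and column names are made up:

CREATE TABLE dbo.NewsArticles (
    ArticleId INT IDENTITY(1, 1) PRIMARY KEY,
    Title     NVARCHAR(200) NOT NULL,
    Body      NVARCHAR(MAX) NOT NULL   -- holds the full article text (up to 2 GB)
);

INSERT INTO dbo.NewsArticles (Title, Body)
VALUES (N'Sample headline', N'Full article text goes here, ideally passed in as a parameter.');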
I have a table that I'm inserting a file into, using the Image data type to store the binary object. The code below works fine for files around 1.5 MB, but for anything larger it's as if the code won't even execute, and I get a Page Not Found error. I'm in the process of running some traces to find out what's going on in the back end, but I'm assuming there's something amiss with my code. The Image data type should handle files of that size with no problem, but for some reason it isn't. Does anyone see anything wrong? Thanks

Dim iLength As Integer = CType(File1.PostedFile.InputStream.Length, Integer)
If iLength = 0 Then Exit Sub 'not a valid file
Dim sContentType As String = File1.PostedFile.ContentType
Dim sFileName As String, i As Integer
Dim bytContent As Byte()
ReDim bytContent(iLength) 'byte array, set to file size

'strip the path off the filename
i = InStrRev(File1.PostedFile.FileName.Trim, "\")
If i = 0 Then
    sFileName = File1.PostedFile.FileName.Trim
Else
    sFileName = Right(File1.PostedFile.FileName.Trim, Len(File1.PostedFile.FileName.Trim) - i)
End If

conn = New SqlConnection(eco)
conn.Open()
cmd = New SqlCommand("INSERT INTO ECO_Attachments (ECOID, FromType, DocName, OldRev, NewRev, NtLogin, DisplayName, FileName, FileSize, FileData, ContentType) VALUES (@ECOID, @FromType, @DocName, @OldRev, @NewRev, @NtLogin, @DisplayName, @FileName, @FileSize, @FileData, @ContentType)")
cmd.Connection = conn

Try
    File1.PostedFile.InputStream.Read(bytContent, 0, iLength)
    With cmd
        .Parameters.Add("@ECOID", SqlDbType.Int)
        .Parameters.Add("@FromType", SqlDbType.NVarChar, 50)
        .Parameters.Add("@DocName", SqlDbType.NVarChar, 250)
        .Parameters.Add("@OldRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NewRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NTLogin", SqlDbType.NVarChar, 100)
        .Parameters.Add("@DisplayName", SqlDbType.NVarChar, 200)
        .Parameters.Add("@FileName", SqlDbType.NVarChar, 255)
        .Parameters.Add("@FileSize", SqlDbType.Real)
        .Parameters.Add("@FileData", SqlDbType.Image)
        .Parameters.Add("@ContentType", SqlDbType.NVarChar, 50)
        .Parameters("@ECOID").Value = ECOID
        .Parameters("@FromType").Value = From
        .Parameters("@DocName").Value = DocName
        .Parameters("@OldRev").Value = OldRev
        .Parameters("@NewRev").Value = NewRev
        .Parameters("@NTLogin").Value = NTLogon
        .Parameters("@DisplayName").Value = DisplayName
        .Parameters("@FileName").Value = sFileName
        .Parameters("@FileSize").Value = iLength
        .Parameters("@FileData").Value = bytContent
        .Parameters("@ContentType").Value = sContentType
        .ExecuteNonQuery() '.ExecuteScalar()
    End With
Catch ex As Exception
    Response.Write(ex) 'Handle your database error here
    conn.Close()
End Try
Here's my dilemma: I have a file that's 308 bytes wide by 5.7 million records. The record length is fixed and the position and width of each field within the record is known. When I run DTS I receive this error from the MS DTS flat file provider: "Error creating file mapping view: not enough storage is available to process this command." Then, when I try to continue with the wizard, it will not allow me to separate the data into the format that I need. Is there any other way to import this file using DTS?
Hi, my data files sit in the default directories and I think they are causing my partition to run out of space. I mainly use one db that I created and don't use the others (i.e. master, model, tempdb, etc.), yet I see their MDF and LDF files growing. What can I do to shrink them down, or perhaps move them off to a larger partition after shrinking?
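As a starting point, something like the sketch below shows where each database's files live and shrinks an oversized file; the database and logical file names are placeholders. Moving a user database generally means detaching it, copying the files to the new partition and reattaching them, while system databases such as master and tempdb need their own, different procedure and should not be detached.

USE SomeUserDb;
EXEC sp_helpfile;                        -- lists logical names, physical paths and sizes

DBCC SHRINKFILE (SomeUserDb_Data, 500);  -- shrink the data file to roughly 500 MB
DBCC SHRINKFILE (SomeUserDb_Log, 50);    -- shrink the log file to roughly 50 MB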
Hi... During my web search looking for a solution I ran across SQL CE 3.5 articles. My questions about SQL CE 3.5 are: 1) Can SQL CE 3.5 handle a 4-6 GB file (read and parse with SQL)? 2) Can SQL CE 3.5 act as a standalone client that a user can use to view a large (4-6 GB) text file, or will I need a small .NET client to read the large (4-6 GB) text file? More info: the text file will reside on the machine where SQL CE 3.5 is installed. There is no pull to get the data.
I created an SSIS solution for reading data from dBase files and storing it in SQL Server. In a ForEach directory loop, up to one thousand dBase files are read and stored. The system where the packages run has 16 GB RAM. For the first few hundred dBase files everything goes fine, but then the RAM no longer seems to suffice and a temp file is created (I changed the path in BufferTempStoragePath).
How can it be that temp files need to be created when there is so much RAM available? Why does RAM fill up more and more during the SSIS package execution? Is there anything I can do to release some of it? (It is running in a loop and there is no need to keep all the data.) Could it be caused by dBase? (I use the Microsoft Jet 4.0 OLE DB Provider.)
Another thing is that the temp file is not stored in the path I set in BufferTempStoragePath. Sufficient permissions are set, but the temp file is still created in the user temp folder...
I need to create a script that will import large XML files (500 MB to 7 GB) on a daily basis and store the data in a relational db structure.
What is the best and fastest way of importing such files? I have played around with smaller files and found the following.
1. SSIS XML Data Source: it doesn't seem to like the complex element types and throws out the file.
2. Bulk file import, storing the file in an XML variable and using XQuery to parse it: this works, but it can't take a file of more than 2 GB, so I can't use this method.
3. C# + XML serialization: this also works, but seems terribly slow. I open the DB connection once, so it doesn't open and close for each db call, but it still takes a long time.
So, how do I import large XML quickly into a relational table structure?
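For reference, approach 2 above looks roughly like the sketch below (the file path, target table and element names are invented); the single XML variable is exactly what runs into the 2 GB limit of the xml type, which is why it fails on the larger files.

-- load the whole file into an xml variable, then shred it with XQuery
DECLARE @x XML;

SELECT @x = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\Imports\data.xml', SINGLE_BLOB) AS src;

INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT n.value('(col1)[1]', 'NVARCHAR(100)'),
       n.value('(col2)[1]', 'INT')
FROM @x.nodes('/root/row') AS t(n);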
What is the easiest way to get the definition of a large fixed-width text file (200 columns) into SSIS? Having to define each column with the ruler would be very cumbersome.
I have several databases that have grown to 300 GB, and I would like to distribute the data into multiple files across multiple drives. Can I create a new database that is spread across the new drives and use a full backup to restore into it, or am I stuck with unloading the data table by table?
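For what it's worth, extra files and filegroups can also be added to the existing database and objects moved onto them, roughly as sketched below; the database name, paths and sizes are placeholders.

-- add a new filegroup on another drive and give it a file
ALTER DATABASE MyBigDb ADD FILEGROUP FG_Drive2;

ALTER DATABASE MyBigDb
ADD FILE (
    NAME = MyBigDb_Data2,
    FILENAME = 'E:\SQLData\MyBigDb_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP FG_Drive2;

-- a table can then be moved by rebuilding its clustered index on the new filegroup, e.g.:
-- CREATE CLUSTERED INDEX PK_MyTable ON dbo.MyTable (Id) WITH (DROP_EXISTING = ON) ON FG_Drive2;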
I am attempting to restore the database from within a VB.NET application. I am making the following 3 calls:
RESTORE FILELISTONLY FROM DISK = 'C:\MyDatabase.dat'

USE Master
RESTORE DATABASE MyDatabase FROM DISK = 'C:\MyDatabase.dat'
WITH NORECOVERY,
     MOVE 'MyDatabase' TO 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\LDF\MyDatabase.ldf',
     REPLACE

RESTORE DATABASE MyDatabase FROM DISK = 'C:\MyDatabase.dat'
using SMO. This logic works fine with small *.dat files; however, when using a *.dat file of about 4 GB I get an error on the third RESTORE DATABASE call:
ExecuteNonQuery failed for Database 'master'.
An exception occurred while executing a Transact-SQL statement or batch.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Operator aborted backup or restore. See the error messages returned to the console for more details.
The same program/logic also works fine when I use MS SQL 2005, and it runs fine from the MS SQL 2005 Query Analyzer for both 2005 and 2000 databases. There only seems to be a problem with MS SQL 2000 from within VB.NET. Does anybody have any idea? I'd appreciate any response. Thanks
I have a master table containing details of over 800000 surveys made up of approximately 400 distinct document names and versions. Each document can have as few as 10 questions but as many as 150. Each question represents one row.
My challenge is to create a separate spreadsheet for each of the 400 distinct document names and versions containing all the rows and columns present in the master table. The largest number of rows would be around 150 and therefore each spreadsheet will not be very big.
E.g. from my sample data below, I will need to create individual Excel files named as follows:
"Document1Version1.xlsx" containing all the column names and 6 rows for the 6 questions relating to Document 1 version 1;
"Document1Version2.xlsx" containing all the column names and 8 rows for the 8 questions relating to Document 1 version 2;
"Document2Version1.xlsx" containing all the column names and 4 rows for the 4 questions relating to Document 2 version 1.
I assume that one of the first things to do is to create a lookup of the distinct document names and versions, assign some variables, and then use this lookup to loop through and sequentially filter the master table data ready for creating the individual Excel files (a rough sketch of this follows the temp-table definition below).
--CREATE TEMP TABLE FOR EXAMPLE
IF OBJECT_ID('tempdb..#excelTest') IS NOT NULL
    DROP TABLE #excelTest

CREATE TABLE #excelTest (
    [rowID] [nvarchar](10) NULL,
    [docName] [nvarchar](50) NULL,
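A rough sketch of that lookup-and-loop idea, reusing the docName column from the sample plus an assumed docVersion column; the actual export to an .xlsx file would be done by whatever mechanism is chosen (an SSIS package per document, bcp, OPENROWSET, etc.):

DECLARE @docName NVARCHAR(50), @docVersion NVARCHAR(10);

DECLARE docCursor CURSOR FAST_FORWARD FOR
    SELECT DISTINCT [docName], [docVersion]
    FROM #excelTest;

OPEN docCursor;
FETCH NEXT FROM docCursor INTO @docName, @docVersion;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- filter the master data for this document/version; this result set is what
    -- would be written to a file named @docName + @docVersion + '.xlsx'
    SELECT *
    FROM #excelTest
    WHERE [docName] = @docName
      AND [docVersion] = @docVersion;

    FETCH NEXT FROM docCursor INTO @docName, @docVersion;
END

CLOSE docCursor;
DEALLOCATE docCursor;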
This is my first time posting here; I hope this question has not been asked before. I tried to search for it but came up with nothing.
Recreating the error :
I am using VS2005. I created a Pocket PC 2003 project. I downloaded SQL Server Compact Edition and installed it, took the System.Data.SqlServerCe.dll file from the installation directory, and referenced that DLL using Add Reference in VS2005.
Build it, and in the Bin folder a long list of files suddenly appears:
System.data.dll System.data.oracleClient.dll system.web.dll system.enterpriseservices.dll system.enterpriseservices.wrapper.dll system.transactions.dll and the rest of your original files in Bin
The worst of it all: all of these files are deployed to the emulator, causing it to run out of memory and the deployment to fail.
Something is not right here, and I just cannot figure it out! If this happens, each mobile device can hold only one application. That's not the way it should be, right?
If you have solved this before, do help. I am at my wits end at the moment.
We are processing 6,000,000 rows (a 2 GB flat file) and loading them into database tables using OLE DB Destination components. In the data pipeline of an SSIS package we have 1 flat file source reader, 7 lookup components (full cache mode), 1 multicast component and 2 OLE DB destinations with the fast load option.
We have observed that the first 1,000,000 rows are processed and loaded into the target tables in just 4 minutes. The second set of 1,000,000 rows is processed in 15 minutes. After this, SSIS takes approximately 8-10 minutes to process each further 100,000 rows. We are not able to identify the reasons for this unexpected behaviour of SSIS.
We thought that, as the input file size is 2 GB, SSIS was not able to manage it and was slowing down over the course of execution. We split the big input file into 60 small files of about 37 MB each. Then we modified the package by adding a For-Each Loop task to process all 60 small files and load them into the database server sequentially. Even with this approach, data loading slowed down drastically after processing 13 files.
To verify whether there was a problem with reading the source file or with the transformations, we replaced the OLE DB destination components with Flat File destinations. With the Flat File destinations the processing rate is very constant: every 8 minutes the package processes 1,000,000 rows and writes them to the destination files. So there is no problem with either the lookup components or the flat file source reader.
We are sure that the target database server is in the same state/condition from the start to the end of package execution. The client box on which we are running the package has 1 GB RAM. During package execution the CPU usage is at 30% and PF usage is 580 MB. SP1 is also installed on both the client and the server.
Does anyone have a clue what is causing the data load to slow down over the course of package execution?
I'm currently experiencing major problems with SSIS when opening and editing large .DTSX package files that contain Exec DTS 2000 Tasks which have the package data loaded internally. I have no issues if I point the task to a .DTS file, or to an actual DTS package on a SQL 2000 server, but if I load the package internally then once the underlying .DTSX file gets over around 17 MB or so in size (which doesn't take long when making a few edits to even fairly simple packages now and then), I start to experience major issues with VS/BIDS 2005 crashing randomly when I try to perform any action on the package (open, save, etc.): things like OutOfMemory exceptions, followed by the properties of the Exec DTS 2000 task being deleted, sometimes accompanied by messages about the application not being installed properly.
Again, it's ONLY when the underlying .DTSX file reaches a certain size, and only when I've got an Exec DTS 2000 task with the package loaded internally. I've replicated the issue using several different package files on several different machines (even on servers with lots of memory, fwiw).
Can anyone out there help me with this? SSIS - namely SSIS Exec DTS 2000 package tasks - is our lifeblood at my company, and this trend of random and serious crashing on large package files is very disturbing to say the least.
I am very new to SQL and am basically self-taught (apart from SQL for Dummies and a couple of other books). I am hoping someone can help me with the CONVERT function.
My system outputs dates as numbers such as '12345'. What I have written so far is this:
SELECT Resprj.Re_code, Res.Re_desc, Resprj.Pr_code, Projects.Pr_desc,
       Res.Re_status1, Projects.active, Projects.Pr_start, Projects.Pr_end
FROM Res
INNER JOIN Resprj ON (Res.Re_code = Resprj.Re_code)
INNER JOIN Projects ON (Projects.Pr_code = Resprj.Pr_code)
    AND Projects.Pr_desc LIKE '%C9%'
WHERE Projects.active = -1
ORDER BY Projects.Pr_code, Res.Re_desc
Could someone please help with using CONVERT to change the date from '12345' to dd/mm/yy?
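If the number is a day count from some base date (which base date depends on the system producing it, so this is an assumption to verify), the conversion can be done by adding the days to that base date and then formatting with CONVERT style 3, which gives dd/mm/yy:

DECLARE @days INT;
SET @days = 12345;

-- assuming the count is days since 1 Jan 1900; adjust the base date to match your system
SELECT CONVERT(VARCHAR(8), DATEADD(DAY, @days, '19000101'), 3) AS formatted_date;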
I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 driver (the DSN was configured by a sysadmin on the AS400, so it is configured properly). When I execute the package manually (by right-clicking and choosing "Execute Package") the package runs fine and returns data in no time (approximately 30,000 rows in 15 seconds).
The problem starts when a job executes the same package! When I start the job, the DTS package runs very, very slowly; sometimes it takes hours to return a few rows, and it seems to be stuck.
The SQL Agent is running as an NT account with administrator rights and has full access to the AS400, so the problem is not the agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly and then sits in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what the problem could be? I need help quickly! Thank you.
I have a huge speed issue on one or two of my SQL Tables. I have included the basic design below.
Structure: Id, ParentId, Name
Group: Id, ParentId, Name, Weight
Products: Id, Name
StructureProducts: StructureId, ProductId, Imported
StructureGroups: StructureId, GroupId
GroupProducts: GroupId, ProductId
AnswerDates: Id, AssessmentDate
Scores (this is the slow table): AnswerDateId, StructureId, GroupId (nullable), ProductId (nullable), Score (>= 0 and <= 100)
Ok, Structures are the start of everything. Structures have children. If a group/product is linked to a parent or child structure, then that group/product is visible along the structure tree flow path. Groups, like structures, have children, and also like structures, if a group is given a product, then that product is visible through the structure tree flow path.
Example:
Earth [Structure]
- Asia [Structure]
--- China [Structure]
--- Japan [Structure]
----- Computer Stuff [Group]
------- Desktops [Group]
------- Servers [Group]
------- Laptops [Group]
--------- HP [Product]
--------- Dell [Product]
--------- Fujitsu [Product]
- Europe [Structure]
--- Germany [Structure]
----- Berlin [Structure]
--- Italy [Structure]
----- Rome [Structure]
----- Venice [Structure]
- America [Structure]
--- United States of America [Structure]
----- New York [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Washington [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Chicago [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
- Africa [Structure]
--- South Africa [Structure]
----- Johannesburg [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Durban [Structure]
----- Cape Town [Structure]
- Australasia [Structure]
So the initial steps that happen (with regards to scoring) are as follows:
1. Insert the root score (which would be for a structure, a group, an answer date and either a product or no product).
2. Score the next group up along the tree view, using the scores for the groups at the same level as the original group (0 score if no score exists).
3. Continue this until the group tree is at the root (ParentId == null).
4. Using the next structure up along the tree view, repeat steps 2 & 3.
5. Continue step 4 until the structure is at the root (ParentId == null).
Example: scoring a product for the Johannesburg Acer laptop would go as follows.
1. Initial score for [Acer] product against Group [Laptop] for Johannesburg.
2. Calculate score for all products (ProductId = null) against [Laptop] for Johannesburg.
3. Calculate score for [Acer] product against Group [Computer Stuff] for Johannesburg.
4. Calculate score for all products against Group [Computer Stuff] for Johannesburg.
5. Calculate score for [Acer] product against all root groups for Johannesburg (Groups [Computer Stuff] and [Home Stuff]).
6. Calculate score for all products against all root groups for Johannesburg (Groups [Computer Stuff] and [Home Stuff]).
7. Calculate score for [Acer] product against Group [Laptop] for South Africa.
8. Calculate score for all products (ProductId = null) against [Laptop] for South Africa.
9. Calculate score for [Acer] product against Group [Computer Stuff] for South Africa.
10. Calculate score for all products against Group [Computer Stuff] for South Africa.
11. Calculate score for [Acer] product against all root groups for South Africa (Groups [Computer Stuff] and [Home Stuff]).
12. Calculate score for all products against all root groups for South Africa (Groups [Computer Stuff] and [Home Stuff]).
13. Calculate score for [Acer] product against Group [Laptop] for Africa.
14. Calculate score for all products (ProductId = null) against [Laptop] for Africa.
15. Calculate score for [Acer] product against Group [Computer Stuff] for Africa.
16. Calculate score for all products against Group [Computer Stuff] for Africa.
17. Calculate score for [Acer] product against all root groups for Africa (Groups [Computer Stuff] and [Home Stuff]).
18. Calculate score for all products against all root groups for Africa (Groups [Computer Stuff] and [Home Stuff]).
etc. etc. etc...
This basically covers the concept behind the basic scoring methodology. Now the methodology splits in two. The first, Methodology 1, says it should do these calculations using the exact same date as the original scored date (i.e. if I do a score today, only scores from today will be used in the calculations). The other, Methodology 2, says it should do the calculations on the latest available date (i.e. if I do a score today, only scores from today and the latest before today will be used in the calculations).
Now, to add another problem to this already complex process, each group and each product within a structure can have either of the two scoring methodologies assigned to it. Also, products can only be scored against the structures and groups that they are assigned to; i.e. Acer exists in the Laptop group in Johannesburg, South Africa and Africa, but doesn't exist in New York.
OK, so now that I've explained briefly how this scoring works, let me get to the heart of the problem. Basically it's speed (one can clearly see why), though the speed issue only comes up in one place, and that is where it has to look backwards for the latest available score for the required group, structure and product.
For this to happen I wrote a function:

ALTER FUNCTION [dbo].[GetLatestAnswerDateId]
(
    @StructureId INT,
    @GroupId INT,
    @ProductId INT,
    @AnswerDateId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @Id INT
    DECLARE @Date DATETIME

    SELECT TOP 1 @Date = [Date]
    FROM [dbo].[AnswerDate]
    WHERE [Id] = ISNULL(@AnswerDateId, [Id])
    ORDER BY [Date] DESC

    SELECT TOP 1 @Id = ad.id --gs.[AnswerDateId]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.Id = gs.AnswerDateId
    WHERE [StructureId] = @StructureId
      AND ISNULL([GroupId], -1) = ISNULL(@GroupId, -1)
      AND ISNULL([ProductId], -1) = ISNULL(@ProductId, -1)
      AND [Date] <= @Date
    ORDER BY [Date] DESC

    RETURN @Id
END
Now on small amounts of data (1,000 rows or so) it's quick, though that is because the data is minimal; on large amounts of data this function runs for a long time, specifically in the context of the following query when there are 6 months of scoring data (100,000+ rows) to peruse.

SELECT [StructureId], [GroupId], [AnswerDateId], [ProductId], [Score]
FROM [Scoring]
WHERE AnswerDateId = dbo.GetLatestAnswerDateId([StructureId], [GroupId], [ProductId], NULL)
  AND [StructureId] = South Africa
  AND [GroupId] = Computer Stuff
  AND [ProductId] = Acer
Any ideas on how to make this quick, or at least quicker?
My current runtime for calculating the 2,500 base scores (totalling 100,000+ rows) is 15 hours, though this is an initial calculation and is only supposed to be done once. Also, the calculations are all correct, so my only issue is the speed of the entire process.
Hi, I have a table defined as:

CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO

The table contains 5.6 million rows and has non-clustered indexes on Date, ReasonID, maj_Reason and Connection. Compared to other tables on the same server, this one is extremely slow. A simple query such as:

SELECT SD.reason_desc,
       SD.Duration,
       SD.maj_reason_desc,
       SD.[Connection],
       SD.[EnteredBy]
FROM dbo.[SH_Data] SD
WHERE SD.[Date] > DATEADD(MONTH, -2, GETDATE())

takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment.
How can I assess what is causing the query to take so long? And what could I possibly do to speed this table up? The database itself is running on a dedicated server which hosts some other databases, none of which have this performance issue.
Anyone have any ideas?
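One way to see where the time is going is to turn on timing and I/O statistics around the query and check in the graphical execution plan whether the index on Date is actually used; with two months of a 5.6-million-row table, the optimizer may well be choosing a scan of the whole (wide) table instead:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.[EnteredBy]
FROM dbo.[SH_Data] SD
WHERE SD.[Date] > DATEADD(MONTH, -2, GETDATE());

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- the "logical reads" figure plus the execution plan show whether the Date index is used
-- or whether the whole table is being scanned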
"Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file."
I click Yes and my project opens normally. Does anyone know why this happens? My project is small: it has one package, which imports Excel files into SQL Server 2005.
Hello! I have a query that joins five tables and returns around 45,000 rows, and it takes no more than a minute to execute in Management Studio on SQL Server 2005: 2 CPUs (dual core, 32-bit), 4 GB RAM and a RAID 5 disk system. The O/S is Windows 2003 SP2 Standard Edition.
When the same query is executed in SSRS 2005, with some drilldown and summary of drilldown levels, it never finishes executing.
Looking at the hardware in Performance Monitor reveals nothing strange, except that CPU time is around 40 percent. Over 2 GB of memory is available all the time.
Any suggestions are appreciated!
Are there any problems with a SQL Server 2005 source database running at the SQL Server 2000 compatibility level?
Right, I'm no SQL programmer. As I type this, I have roughly a third of the hair I had at 5 o'clock last night. I even lost sleep over it.
I'm trying to return a list of records from a database holding organisation names. As I've built a table to hold record versions, the key fields (with sample data) from a view I created to display this are as follows:
As you can see, the record id will always be unique. Record 3 is a newer version of record 1, and record 4 of record 2. The issue is this: I only want to return unique organisations. If a version of the organisation record is live on the system (in this case record id 3), I want to return the live version with its unique record id. I'm assuming for this I can perform a simple "SELECT ... WHERE live = 1" query.
However, some organisations will have no live versions (see the org with id 2). I still wish to return a record for this organisation, but in this case the most recent version, i.e. version 2 (and again, its unique record id).
In actual fact, it seems so much clearer when laid out like this. However, I feel it's not going to happen at this end, and so any help would be greatly appreciated.
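Assuming columns along the lines of a record id, organisation id, version number and live flag (the sample data isn't shown above, so these names are guesses), one way to express "the live version if there is one, otherwise the most recent version" is to rank each organisation's records and keep the top one. A sketch using ROW_NUMBER, which needs SQL Server 2005 or later:

SELECT recordId, orgId, version, live
FROM (
    SELECT recordId, orgId, version, live,
           ROW_NUMBER() OVER (PARTITION BY orgId
                              ORDER BY live DESC, version DESC) AS rn
    FROM dbo.OrganisationVersions          -- the view described above; the name is a placeholder
) AS ranked
WHERE rn = 1;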
I am attaching a database with 3 data files. When I execute "EXEC sp_attach_db ..." I get this error: database 'POINT' cannot be opened because some of the files could not be activated. I have deleted its LDF file. Usually I detach my db, delete the transaction log, and reattach the 3 data files... Now it doesn't work! Can someone help me? Thanks.
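For reference, the detach/reattach workflow described above looks roughly like the sketch below (the database name is taken from the error; the file paths are invented). Whether the missing log can be rebuilt on attach depends on how the database was shut down, which is likely where it is failing here.

EXEC sp_detach_db 'POINT';

-- the transaction log file is deleted (or moved away) at this point

EXEC sp_attach_db
    @dbname    = 'POINT',
    @filename1 = 'C:\Data\POINT_Data1.mdf',
    @filename2 = 'C:\Data\POINT_Data2.ndf',
    @filename3 = 'C:\Data\POINT_Data3.ndf';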
I have a dtsx package that works fine with one exception. When I open the dtsx package in BI, it gives me the following message:
Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?
When I respond Yes, the package opens and I can edit or execute it with no problem. Still, I want to understand what could cause this message and, more importantly, how I can get rid of it. When I try to simply execute the package I still get the same message, and it seems this will be a problem when trying to run the package from SQL Server Agent.
It seems likely to me that this message refers to the dtsx file (in xml format) itself. Does that make any sense?