Does anyone know what would cause my log file (.LDF) to grow at a rate of over 1MB per second and quickly fill up the hard drive? My experience is in Oracle, but I'm assuming you can set a maximum size for the log file, for starters? I'm not sure why it would be growing at this rate anyway. I could use a quick answer on this one. Thanks!
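For the sizing part, SQL Server does let you cap the log file. A minimal sketch (the database and logical file names here are placeholders, and the catalog query assumes SQL Server 2005 or later):

    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, MAXSIZE = 2048MB);  -- cap the log at 2 GB

    -- runaway growth is often the FULL recovery model with no log backups;
    -- check which model the database is using:
    SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDB';

Note that capping alone just converts "disk full" into a log-full (9002) error, so the cause of the growth still needs addressing.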
I'm wondering how SQL handles a full transaction log. I'm running a population of a temp table with a lot of rows (roughly 70M), and when I run DBCC SQLPERF(LOGSPACE) it says the tempdb log is 99% full. It doesn't throw an error, so is it still working? Does it have to do some sort of disk/memory swapping of the transaction log? Is occasionally filling up the transaction log OK, or is it something that has dire consequences for other operations as well?
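A quick check worth running alongside DBCC SQLPERF: on SQL Server 2005 and later, sys.databases also reports what is preventing log space from being reused (a sketch; on earlier versions only the DBCC output is available):

    DBCC SQLPERF(LOGSPACE);  -- log size and percent used, per database

    SELECT name, log_reuse_wait_desc  -- e.g. ACTIVE_TRANSACTION while the load runs
    FROM sys.databases
    WHERE name = 'tempdb';

A 99% full tempdb log is not an error by itself; the statement keeps running and the log auto-grows if it can, but it becomes a 9002 error for whatever operation needs space the moment growth is no longer possible.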
I'm running SQL Server 2005 Express Advanced Services on a Windows 2003 Server in a hosted environment. Sometimes SQL Server starts writing entries into C:\Programme\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG until the disk is full. After that, I have to delete the error log file (several GB in size) and restart the server, and everything runs fine until the log file runs amok again.
I have installed SQL Server Management Studio.
With SQL Server 2005 Standard I can configure or disable error logging in Management Studio, but with the Express Edition that doesn't seem to be possible.
What I want to do is (maybe with system stored procedures):
- limit the number of error log files by cycling them, e.g. keep 5 files and delete the older ones
- limit the size of a single log file, e.g. to 100 MB
Is there an option to configure this in the Express edition of SQL Server 2005?
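Express has no Agent or UI for this, but two pieces still work there. A sketch, noting that NumErrorLogs is an undocumented registry setting and the instance key path may differ on your machine:

    EXEC sp_cycle_errorlog;  -- closes the current ERRORLOG and starts a new one

    -- limit how many archived error logs are kept (undocumented registry value):
    EXEC xp_instance_regwrite
         N'HKEY_LOCAL_MACHINE',
         N'Software\Microsoft\MSSQLServer\MSSQLServer',
         N'NumErrorLogs',
         REG_DWORD,
         5;

There is no supported per-file size cap in 2005; scheduling sp_cycle_errorlog externally (e.g. Windows Task Scheduler running sqlcmd, since Express lacks SQL Agent) is the usual approximation.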
I have SSIS SP2 running on a Win2003 64-bit server with 4 processors and 16GB of RAM. I am trying to load 1 billion rows of data into 10 tables. The source data is found in 12 different 50GB fixed-width flat files stored on 2 different file servers. The destination is 10 different tables in a single SQL Server 2000 database which has 1TB of space allocated to it. I use the MS SQL OLE DB connection for each destination table.
The SSIS package is pretty straightforward. Everything takes place in 1 data flow. The 12 sources each flow through 12 different Row Count Transformations into a single Union All Transformation. From the Union All Transformation the data goes into another Row Count Transformation and then into a Conditional Split Transformation. The data is split into 10 streams based on the last digit of one of the ID fields in the data. The 10 streams are fed to the 10 destination tables.
Every time I run the package (Start without Debugging), the available physical memory goes from around 15GB to 0 in about 2 minutes. The % Committed Bytes In Use goes from 5% to 100% in about 5 minutes. Once at 100%, it stays there for around 5 minutes before it finally gives me the following error message:
The system reports 98 percent memory load. There are 17178939392 bytes of physical memory with 189382656 bytes free. There are 8796092891136 bytes of virtual memory with 8742748930048 bytes free. The paging file has 54388109312 bytes with 16056320 bytes free.
This message is followed by a bunch of other messages:
SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Union All" (2073) failed with error code 0x8007000E. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
SSIS Error Code DTS_E_THREADFAILED. Thread "WorkThread1" has exited with error code 0x8007000E. There may be error messages posted before this with more information on why the thread has exited. The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Dr 2" (663) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
...
SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Dr 3" (898) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
SSIS Error Code DTS_E_THREADFAILED. Thread "SourceThread2" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited.
I have tried adjusting Engine Threads down from 5 to 4 to 2. I have tried adjusting FastLoadMaxInsertCommitSize from 1000000 to 100000 to 1000 (the destinations use table lock and check constraints). I have tried moving DefaultBufferMaxRows up to 16500 and down to 2000.
Nothing works. The package fails every time within 20 minutes of starting.
I would prefer not to have to rewrite the package and process each file sequentially as that would take forever.
In my case I have to log the errors raised by any task in a package to either the Windows event log, a text file, or SQL Server. I also need to send email notifications to a group of people telling them about the error.
Can I use SSIS package logging to log the errors to the required destinations? I mean right-clicking the package, selecting Logging, adding the required log providers, and enabling the events to be logged to them. I think I have to select the log providers up front, so I will not have the liberty of logging the error to a destination whose name is passed to the package as a variable. That is okay with me, though.
Now, what would a custom log provider help me do in this case? Also, can I somehow configure my package to call the Send Mail task every time an error is raised?
Also, one more option could be developing a package that only does error handling. It would take in parameters for the error codes and descriptions, the destination to write to, and a flag indicating whether or not to send mail for that particular type of error.
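On the built-in providers: if you pick the SSIS log provider for SQL Server and enable the OnError event, entries land in the sysdtslog90 table (SSIS 2005), which a notification package could then read; and a package-level OnError event handler is the standard place to put a Send Mail task so it fires on every error. A sketch of reading the log table:

    SELECT event, source, starttime, message
    FROM dbo.sysdtslog90
    WHERE event = 'OnError'
    ORDER BY starttime DESC;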
I currently have a SQL backup process that backs up my databases over the network to a backup hard drive on a separate system. I recently began having strange issues with my backup process: it continually writes to the backup drive until the drive fills up, and then the job fails. I also noticed that when I kill the job on the host server, the backup file drops back to its normal size. The normal file size is 300 GB, but it has grown to over 400 GB. I looked at various logs and even performed several backup tests successfully.
I am trying to figure out whether this is a known SQL Server issue or an issue with the OS.
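One thing worth ruling out first: BACKUP ... TO DISK appends a new backup set to the file by default (NOINIT), so the file grows on every run; WITH INIT overwrites instead. A sketch (database name and UNC path are placeholders):

    BACKUP DATABASE MyDB
    TO DISK = '\\backupserver\backups\MyDB.bak'
    WITH INIT,       -- overwrite existing backup sets rather than appending
         STATS = 10; -- progress messages every 10 percent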
I have this query I need for a report. Originally it was 4 queries to be used in Crystal Reports. Now I want to create the same report with SSRS, and therefore I incorporated all the queries into one in order not to use subreports [URL].....
Tempdb fills up to nearly 90 GB. I am running SQL Server on a local box, so I am sure there is no other traffic. Here is the query:
SELECT AdHaupt.NSprache_ID,
       AdHaupt.mengentext AS mengentextHaupt,
       AdHaupt.Einzelpreis,
       AdHaupt.Anzeigebezeichnung,
       AdHaupt.Gesamtpreis
[Code] ...
I ran it with TOP 10 as well, just to see if it would finish at all, but it never did (it has been running for an hour now).
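When a query inflates tempdb like this, it is often hash/sort spills from a many-to-many join rather than the size of the result. On 2005 and later the space can be traced back to the session while the query runs; a sketch:

    SELECT session_id,
           SUM(internal_objects_alloc_page_count) AS internal_pages,  -- sorts, hashes, spools
           SUM(user_objects_alloc_page_count)     AS user_pages       -- temp tables, table variables
    FROM sys.dm_db_task_space_usage
    GROUP BY session_id
    ORDER BY internal_pages DESC;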
Hi, the tempdb file on one of our servers grew very large and used all available disk space. This is SQL Server 2000 SP4. I have installed hotfix version 8.00.2187. I opened a Profiler trace but still can't get to the root of the problem. Any help will be appreciated.
I have some simple maintenance plans, but they are failing because the Delete History task is looking for files in a nonexistent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log, whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge, and soft-delete data in MDS. Occasionally we even have to hard-delete data from MDS. If we keep soft-deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard-delete all the soft-deleted records from all MDS tables in a specific model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get SQL scripts for updating software on SQL Server to be portable to other DBMS systems. Doing it in code, rather than in a SQL batch, has a chance of producing cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the approach must work for the lowest common denominator... i.e., a 'recipe' of steps that will work on all DBMSs.
The Problem: There might be simpler ways to do this with SQL Server (all ears :-), just in case I can't make it cross-platform right now), but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP to the original name.
This appears possible in SQL Server too... as long as there are no CASCADE operations. TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be along the lines of the following (sketched in SQL below):
a) Remove the foreign key references
b) Copy the table structure into a new temp table, adding the column
c) Copy the data over
d) Add the FK relations that used to be on the first table to the new table
e) Delete the original
f) Done?
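Here is a minimal T-SQL sketch of those steps, using hypothetical tables Parent and Child and a hypothetical constraint name (note that SELECT INTO copies neither keys nor defaults, so the PK has to be restored before the FK can point at it):

    -- (a) remove the foreign key reference (requires knowing its name; see below)
    ALTER TABLE Child DROP CONSTRAINT FK_Child_Parent;

    -- (b)+(c) copy structure and data under a temporary name, then add the column
    SELECT * INTO Parent_tmp FROM Parent;
    ALTER TABLE Parent_tmp ADD NewCol INT NULL;

    -- (e) drop the original and take over its name
    DROP TABLE Parent;
    EXEC sp_rename 'Parent_tmp', 'Parent';

    -- SELECT INTO does not copy the primary key, so restore it first
    ALTER TABLE Parent ADD CONSTRAINT PK_Parent PRIMARY KEY (ParentID);

    -- (d) reattach the foreign key to the rebuilt table
    ALTER TABLE Child ADD CONSTRAINT FK_Child_Parent
        FOREIGN KEY (ParentID) REFERENCES Parent(ParentID);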
The questions are:
a) How does one alter a table to REMOVE the foreign key references if the constraint has no 'name'?
b) Does anyone know of a good, clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent.
c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
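On (a) and (b): in SQL Server every FK does have a name (a system-generated one if you didn't supply it), and it can be read from the catalog views on 2005 and later; for portability, INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS is closer to a cross-DBMS answer, though column-level detail still varies by vendor. A sketch against the SQL Server catalog:

    SELECT fk.name                              AS constraint_name,
           OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
           pc.name                              AS referencing_column,
           OBJECT_NAME(fk.referenced_object_id) AS referenced_table,
           rc.name                              AS referenced_column
    FROM sys.foreign_keys fk
    JOIN sys.foreign_key_columns fkc
         ON fkc.constraint_object_id = fk.object_id
    JOIN sys.columns pc
         ON pc.object_id = fkc.parent_object_id AND pc.column_id = fkc.parent_column_id
    JOIN sys.columns rc
         ON rc.object_id = fkc.referenced_object_id AND rc.column_id = fkc.referenced_column_id;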
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
        Table A
       /       \
  Table B    Table C
       \       /
        Table D
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "DELETE statement conflicted with COLUMN REFERENCE constraint". This does not make sense to me; can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row... isn't that the whole point of delete triggers?
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
    -- delete the child rows that reference the A rows being removed
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is: how do I maintain referential integrity and also have the database delete child rows, keeping in mind that cascading deletes will not work because of the multiple cascade paths (which are certainly required)?
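One detail that explains the error above: FOR/AFTER triggers fire only after constraint checking, so the FK violation is raised before the trigger ever runs. An INSTEAD OF DELETE trigger runs in place of the delete, before the checks, and can remove the rows bottom-up. A sketch with hypothetical column names (assuming B.MeetingID and C.MeetingID reference A.ID, and D references B.ID and C.ID through BID and CID):

    CREATE TRIGGER DeleteA ON A
    INSTEAD OF DELETE
    AS
    BEGIN
        -- grandchildren first, then children, then the A rows themselves
        DELETE FROM D WHERE BID IN
            (SELECT B.ID FROM B JOIN deleted ON B.MeetingID = deleted.ID);
        DELETE FROM D WHERE CID IN
            (SELECT C.ID FROM C JOIN deleted ON C.MeetingID = deleted.ID);
        DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
        DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
        DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);
    END;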
I'm trying to clean up a database design, and I'm in a situation where two tables need a FK, but since it didn't exist before, there are orphaned records.
Tables are:
Brokers, whose PK is BID.
The second table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete each record that has no matching BID in the Brokers table.
I know this isn't correct syntax, but it should hopefully clear up what I'm asking.
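Something along these lines should do it (a sketch against the tables as described; NOT EXISTS is safer than NOT IN here, because NOT IN deletes nothing if Brokers.BID ever contains a NULL):

    DELETE br
    FROM Broker_Rates br
    WHERE NOT EXISTS (SELECT 1 FROM Brokers b WHERE b.BID = br.BID);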
This is my delete query no. 1:

ALTER TABLE ZT_Master DISABLE TRIGGER ALL

DELETE ZT_Master
WHERE TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE())
                           - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

ALTER TABLE ZT_Master ENABLE TRIGGER ALL
I have trouble with delete query no. 2. Here is a SELECT statement; I need to delete the rows it returns:

SELECT d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE())
                             - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

I tried modifying it as below:

DELETE d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE())
                             - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

but this doesn't work.
Can you please help? And can I combine these two SQL queries into one? Thank you.
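In T-SQL the table to delete from is named by its alias right after DELETE, without the .*; and since a single DELETE statement can only target one table, the two queries cannot be merged into one statement, though they can run as one batch (details first, then masters). A sketch built from your SELECT:

    DELETE d
    FROM ZT_Detail d
    JOIN ZT_Master m ON m.Prikey = d.MasterKey
    WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE())
                                 - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
      AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);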
I'm using SqlDataSource and an Access database. Let's say I have two tables:

user: userID, username
message: userID, messagetext

A user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to do this?

Here's my suggestion: I have made a "delete query" (with userID as a parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.NET page? If yes, what would the script look like? (Yes, it is a newbie question.)
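A minimal sketch of the underlying statement (table and column names as described above; with the Access/OleDb provider, parameters are positional ? markers rather than named ones):

    DELETE FROM [message] WHERE userID = ?;

In ASP.NET this can be wired up declaratively: an AccessDataSource (or SqlDataSource) with that statement as its DeleteCommand and a DeleteParameter bound to the control holding the selected userID, plus a button handler that calls the data source's Delete() method.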
The requirement is: I should allow single-row deletes from a table, but not bulk deletes. An audit table should be updated on any single delete or single update. So I wrote the following triggers for single and bulk deletes:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
[code]...
When I try to run the website, the database error I am getting is: "Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1."
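For reference, the usual shape of a "single delete only" trigger is below (a sketch; the audit table name and columns are hypothetical). As for the quoted error: a count going from 0 to 1 means something opened a transaction and left it open, typically a BEGIN TRANSACTION in the trigger or procedure that is not matched by a COMMIT or ROLLBACK on every code path.

    ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
    ON [dbo].[tbl_attendance]
    AFTER DELETE
    AS
    BEGIN
        IF (SELECT COUNT(*) FROM deleted) > 1
        BEGIN
            RAISERROR ('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
            ROLLBACK TRANSACTION;  -- ends the transaction and aborts the batch
            RETURN;
        END;

        -- hypothetical audit table: log the single deleted row
        INSERT INTO dbo.tbl_attendance_audit (action_type, action_time)
        SELECT 'DELETE', GETDATE()
        FROM deleted;
    END;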
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products
WHERE vendor = 'Foo'
  AND productId NOT IN (SELECT productId FROM sql2000.staging.dbo.fooProductList)
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved was undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every conceivable combination of 2000 and 2005 servers, without any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
The online help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it, but I receive the same message. "Okay, no problem, I'll just delete the report, redeploy it, and set up the subscription again so all the other users aren't affected," says I. "Oh, no!", says the report server, and then it gives me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName)
   at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item)
   at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId)
   at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch()
   --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report, I'm a system admin and content manager on the report server, and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked the credentials on the data source, and tried amending and deleting from both Report Manager and Management Studio, but still I am prevented from doing so.
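The inner exception is coming from SQL Agent, not Reporting Services: each subscription schedule is an Agent job, and if its owner login differs from the one being used for the delete, Agent refuses unless that login is sysadmin on the hosting instance (server roles are separate from SSRS roles, which is why Content Manager doesn't help). A sysadmin can re-own the job; a sketch, with the ReportServer table names as they exist in SSRS 2005 and a hypothetical GUID:

    -- find the schedule (= Agent job name) behind the subscription
    SELECT s.SubscriptionID, rs.ScheduleID
    FROM dbo.Subscriptions s
    JOIN dbo.ReportSchedule rs ON rs.SubscriptionID = s.SubscriptionID;

    -- as sysadmin, hand the job to a login that can manage it
    EXEC msdb.dbo.sp_update_job
         @job_name = 'D1234567-89AB-CDEF-0123-456789ABCDEF',  -- the ScheduleID
         @owner_login_name = 'sa';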
This one is weird, and I am missing something fundamental. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch (@SubmitterTranID VARCHAR(50))
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1  -- DO NOT CHANGE THIS. The operation is committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE claimstatus NOT IN ('Keyed', 'Imported')
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0)  -- claims under this batch have not been adjudicated, so it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE FROM INVOICECLAIMMAPPING
    WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED
                        WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsreceivedPayorServices
    WHERE ClmRecdPyID IN (SELECT ClmRecdPyID FROM ClaimsReceivedPayors
                          WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedPayors
    WHERE ClmRecdid IN (SELECT ClmRecdID FROM ClaimsReceived
                        WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        -- ROLLBACK TRANSACTION  -- for testing purposes only
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indexes and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch (@SubmitterTranID VARCHAR(50))
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1  -- DO NOT CHANGE THIS. The operation is committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE claimstatus NOT IN ('Keyed', 'Imported')
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0)  -- claims under this batch have not been adjudicated, so it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE INVOICECLAIMMAPPING
    FROM INVOICECLAIMMAPPING
    JOIN CLAIMSRECEIVED ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
    WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsreceivedPayorServices
    FROM ClaimsreceivedPayorServices
    JOIN ClaimsReceivedPayors ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID
    WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsReceivedPayors
    FROM ClaimsReceivedPayors
    JOIN ClaimsReceived ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
    WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        -- ROLLBACK TRANSACTION  -- for testing purposes only
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter between the join and the subquery? This one is making me depressed because I cannot explain it.
Need some advice solving a little problem I have with my database!
Current setup:
I have a person table that is made up of 39 columns. I also allow for person records to be deleted but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, Reason for deletion/suppression and a date time stamp. To access Live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in the LogicallyDeletedrecords. Similarly, I have another view DeadPersonData which contains Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I've been thinking of physically deleting dead Person records from my Person table (possibly creating an archive table to hold them). But then, if I delete them, how do I cross-check the details when new Person data gets loaded? As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. Is the only way to add further columns which I can use to match person details?
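On the load cross-check: physically moving dead rows to an archive table doesn't remove the check, it just changes its target; an anti-join against the archive during the load keeps it a single set-based step. A sketch with hypothetical staging/archive names and a stand-in column list:

    INSERT INTO Person (PersonId, Surname)
    SELECT s.PersonId, s.Surname
    FROM StagingPerson s
    WHERE NOT EXISTS (SELECT 1 FROM PersonArchive a WHERE a.PersonId = s.PersonId);

Either way, the decisive factor for both the view and the load is an index on PersonId in whichever table holds the deleted keys; with that in place, even several million rows in LogicallyDeletedrecords should not by itself make the anti-join slow.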
There are two tables involved in replication; let's say table1 and its replicated copy, rep.table1.
We are not deleting records physically in table1; a bit column in table1 is simply set to true when you want to delete a record. The strange thing is that the replication agent reports this as a hard delete operation on table1, so the download reports a hard delete operation and deletes the record in the replicated table, which is a critical problem for us.
Please let me know where I am going wrong and how I can put it right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
The logs on our database keep locking up and returning a file size of 0. Does anyone know of a way to dump the log file? I tried "DUMP TRANSACTION log TO ...." but it did not want to work. Any ideas?
I'm running a SQL Server 2005 Express installation. Now I notice that the SQL Server logs (under Management in Management Studio Express) are empty: no logs are in there, neither under Current nor under the archived ones. Do I have to switch on logging explicitly? (I'm logging on as "sa".)
Hi, I'm using SQL Server Management Studio Express, and I have made a website that uses a database with stored procedures. When running my web pages, it takes around 2 seconds to load each page. That's too long!! But where is the problem? I'd like to see a list of all executed stored procedures with their execution times (for each page). That way I can check whether the problem lies here. How can I get a log file like this? Thanks!
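Express has no Profiler, but on 2005 the plan cache already records per-query timings; something like this, run in the website's database, is a reasonable first look (a sketch; total_elapsed_time is in microseconds, divided here down to milliseconds):

    SELECT TOP 20
           OBJECT_NAME(st.objectid)                          AS proc_name,  -- NULL for ad hoc SQL
           qs.execution_count,
           qs.total_elapsed_time / 1000                      AS total_elapsed_ms,
           qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_elapsed_time DESC;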
What is the best way to clear the transaction logs? My backup job each night is failing because it says it is running out of disk space, and I need to clear up the transaction logs. Any help is appreciated. I see many different options (truncate option, auto-shrink, etc.); I just need some assistance with more specific best approaches. Thanks.
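The log only self-truncates on a log backup (or in SIMPLE recovery), so "clearing" it comes down to one of these. A sketch with placeholder names, noting that WITH TRUNCATE_ONLY is SQL 2000/2005-era syntax that was removed in 2008, and that it breaks the log backup chain:

    -- ongoing, recommended: back the log up regularly between full backups
    BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn';

    -- or, if point-in-time recovery is not needed (2005+ syntax):
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;

    -- one-off emergency truncate-and-shrink (2000/2005 only; take a full backup after)
    BACKUP LOG MyDB WITH TRUNCATE_ONLY;
    DBCC SHRINKFILE (MyDB_log, 100);  -- target size in MB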