I'm trying to run a set of DBCC commands to empty and then delete a secondary log file; however, no matter how large I make the primary log file it won't empty the secondary file. Any suggestions?
DBCC SHRINKDATABASE (VE, 10)
GO
alter database VE modify file (name= VE_log,size = 1200)
GO
dbcc shrinkfile (VE_log2,EMPTYFILE)
GO
Cannot shrink log file 3 (VE_Log2) because all logical log files are in use.
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
------ ------ ----------- ----------- ----------- --------------
17 3 86848 128 86848 128
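The message means the secondary log file still holds active virtual log files, so EMPTYFILE has nothing it can move. A minimal sketch of one common approach, assuming the full recovery model and a placeholder backup path: back up the log so the active portion can cycle back into the primary file, then empty and drop the secondary file.

BACKUP LOG VE TO DISK = 'X:\Backups\VE_log.trn'   -- placeholder path
GO
DBCC SHRINKFILE (VE_log2, EMPTYFILE)
GO
ALTER DATABASE VE REMOVE FILE VE_log2
GO

If the shrink still reports that all logical log files are in use, repeating the log backup (or waiting for open transactions to finish) and retrying usually clears it.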
I have a tab-delimited file I need to transfer to a table using SSIS. Columns can have NULL values, and there might also be extra tabs in each row. How can I do this? Maybe a fuzzy lookup?
I have created a new login on the primary server and granted it db_owner permission on the primary database. How do I transfer this login to the secondary server and assign the same permission on the secondary database?
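One common approach, assuming SQL Server 2008 or later, is to script the login on the primary with its original SID and password hash so that the database user inside the restored or log-shipped database maps to it automatically on the secondary (Microsoft's sp_help_revlogin script does essentially this). A minimal sketch, with 'app_user' as a placeholder login name:

-- run on the primary; generates a CREATE LOGIN statement to run on the secondary
SELECT 'CREATE LOGIN [' + name + '] WITH PASSWORD = '
     + CONVERT(varchar(600), password_hash, 1) + ' HASHED, SID = '
     + CONVERT(varchar(200), sid, 1) + ';'
FROM sys.sql_logins
WHERE name = 'app_user';   -- placeholder

Because the SID matches, the db_owner membership already stored inside the secondary database lines up with the login once that database is restored or brought online.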
I've inherited a database from a SQL 7 system and converted it to SQL 2000. It has a secondary data file (.ndf) and a secondary log file. Because the server configurations are different, it's no longer necessary to have the secondary files. How do I merge the secondary file data into the primary files and then delete the secondary files?
I have a SQL Server 2000 database with a primary data file (MDF) and a secondary data file (NDF). I would like to remove the secondary file and only have the MDF. Is this possible?
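For both of the last two questions, the usual sequence is to empty the secondary data file into the remaining files of its filegroup and then drop it. A minimal sketch, assuming the database is called MyDB, the secondary file's logical name is MyDB_Data2 (check sysfiles / sys.database_files for the real names), and the .ndf belongs to the same filegroup as the .mdf:

USE MyDB
GO
DBCC SHRINKFILE ('MyDB_Data2', EMPTYFILE)
GO
ALTER DATABASE MyDB REMOVE FILE MyDB_Data2
GO

If the .ndf sits in its own filegroup, the objects stored there have to be moved or rebuilt onto the PRIMARY filegroup first, because EMPTYFILE only redistributes pages within the same filegroup.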
I am after T-SQL code which will simply load the next T-log backup file from a network share folder to a warm standby db on a secondary server. What is needed is a third server (Server X) to participate in log shipping (MULTIPLE TARGETS).
Primary server (Server A); secondary server (Server B), log shipped to via the GUI; third server (Server X), which will contain the same log-shipped db from Server A.
This will simply restore the logs from a network share to keep the db up to date.
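A minimal sketch of the restore step Server X would run repeatedly, assuming the standby database was initialised from a full backup WITH NORECOVERY (or WITH STANDBY) and using placeholder share and file names:

RESTORE LOG MyShippedDb
FROM DISK = '\\FileServer\LogShare\MyShippedDb_20240101120000.trn'
WITH NORECOVERY
GO

Wrapping this in an Agent job that lists the share (for example with xp_dirtree or xp_cmdshell 'dir'), sorts the file names, and restores each file newer than the last one applied gives the "load the next T-log backup" behaviour without the log shipping GUI.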
I have a database that has been running well for a few years. It has a single data file. It has now become very large and is creaking and sometimes running slow. Is it possible to now create a secondary data file, or do I have other options? Many thanks.
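Adding a file is certainly possible and is often done to spread I/O across drives. A minimal sketch, with the database name, filegroup name, drive, and sizes all placeholders:

ALTER DATABASE BigDb ADD FILEGROUP FG2
GO
ALTER DATABASE BigDb
ADD FILE (NAME = BigDb_Data2,
          FILENAME = 'F:\SQLData\BigDb_Data2.ndf',
          SIZE = 10240MB, FILEGROWTH = 1024MB)
TO FILEGROUP FG2
GO

Existing data does not move on its own; tables and indexes only use the new filegroup once they are created on it or rebuilt onto it, so this is usually combined with index maintenance or archiving rather than treated as a fix by itself.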
I have a SQL Server 2000 database with two data files: the primary data file has the extension .mdf and the secondary file has the extension .ndf (as per Microsoft's recommendation).
When I try to back up the database and restore it through Enterprise Manager, in the Restore -> Options window I see that both files have the extension .mdf, and when the restore completes, the new database still has the .mdf extension for both files.
Why this behaviour?
* I even tried creating a new test database with two files, and the behaviour is the same.
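This looks like the dialog generating default physical names rather than anything forced by the engine; a scripted restore lets you pick the file names explicitly. A minimal sketch, with all logical and physical names as placeholders, that keeps the .ndf extension:

RESTORE DATABASE TestDb
FROM DISK = 'C:\Backups\TestDb.bak'
WITH MOVE 'TestDb_Data'  TO 'C:\Data\TestDb.mdf',
     MOVE 'TestDb_Data2' TO 'C:\Data\TestDb_2.ndf',
     MOVE 'TestDb_Log'   TO 'C:\Data\TestDb_log.ldf'
GO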
We have an encrypted drive (it can be mounted and dismounted; a third-party tool encrypts the drive path). I wanted to store the secondary file on that encrypted drive path, because the secondary file stores confidential information. I separated the table from the primary into the secondary file: encryption per column is not advisable for that table, so we decided to separate it and put it in a secondary filegroup, with the physical file stored on the mounted drive path.
I can read and write on that mounted drive path. I can also read and write when the drive is unmounted (which I believe means the reads and writes are really being done). When the drive is unmounted, the physical secondary file (.ndf) is not visible to any user logging in to the server itself (this is actually the goal of this encrypted drive setup); it is kept virtually somewhere on the machine, and a password is needed to mount it back.
I'm a bit confused; can somebody advise or give their insight on this setup? I believe that when the drive is dismounted, SQL Server keeps the transactions in cache until it finds that the drive is mounted back, which would mean all those transactions are not yet committed. When the drive is mounted back, I think SQL Server is smart enough to check/know that the drive is physically present again and will flush all the pending transactions from the cache to the hard drive.
Is my assumption correct? Is there anything else I need to know about transactions, commits, and that flushing of data to the hard drive?
We are using log shipping, and we would like to remove all transferred and applied journal (log backup) files on the primary box. We intend to use a trigger like this:
CREATE TRIGGER del_log ON log_shipping_plan_history AFTER INSERT
AS
BEGIN
    DECLARE @lastfile nvarchar(256)

    SELECT @lastfile = i.last_file
    FROM log_shipping_plan_history e
    INNER JOIN inserted i ON e.sequence_id = i.sequence_id
    WHERE i.activity = 1

    IF (@lastfile IS NOT NULL)
    BEGIN
        -- ... remove the file (using xp_cmdshell, for example) ...
    END
END
but the problem is that only the last file transferred and applied will be removed (sometimes more than one file is applied in one shot ... see the num_files column in log_shipping_plan_history).
Is there any solution to remove all the files generated before the last one given by the query? Any other solutions? (The SQL wizard gives the possibility to remove files after a lapse of time: 1 hour, 1 day, ...)
I am also looking for the table that contains all the journal files (the ones we can see when we try to restore a db).
I added a secondary data file to tempdb yesterday and gave it a wrong location by mistake. If I try to change the location, I now get an error. I think that is because tempdb is in use, and that is why I can't change its secondary file's location. Do I need to take tempdb offline and then change the secondary file's location?
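Taking tempdb offline isn't needed; tempdb file locations can be changed while the instance is running, and the new path takes effect at the next service restart, when tempdb is recreated. A minimal sketch, with the logical name and path as placeholders (check sys.master_files or sysaltfiles for the real logical name):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf')
GO

After the restart, the file is created in the new location and the misplaced one can be deleted from disk.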
I have a database of around 500 GB. Right now the database has only one data file and one log file, and it has only one filegroup; all the indexes and tables are placed in the PRIMARY filegroup. We are going to separate them: the plan is to move all the indexes to a secondary filegroup while all the tables stay in the PRIMARY filegroup. But there will be a problem implementing it, because there are around 600 tables and each table has at least 2 non-clustered indexes, so is there any way to move all the indexes to the secondary filegroup?
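There is no single command that relocates every index, but recreating each nonclustered index with DROP_EXISTING moves it without touching the base table. A minimal sketch for one index, assuming SQL Server 2005+ syntax, a filegroup already named [SECONDARY], and placeholder table, index, and column names:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH (DROP_EXISTING = ON)
ON [SECONDARY]
GO

For roughly 1,200 indexes the CREATE INDEX statements are normally generated rather than hand-written, for example by querying sys.indexes and sys.index_columns (or scripting the indexes from SSMS) and appending ON [SECONDARY] to each.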
I have created a Test SSIS Package within BIDS (VS 2K8, v 9.0.30729.4462 QFE; .NET v 3.5 SP1) that connects to our Test Listener.
There is only 1 Connection Manager object, an OLE DB Provider for SQL Server.
The ConnectionString lists: Provider=SQLOLEDB.1;Integrated Security=SSPI
The Test Connection within BIDS works.
The Package Control Flow has just 1 object, an Execute SQL Task that performs an EXEC on an SP that contains only a SELECT (read).
The Package runs within BIDS.
I've placed this Package within a Job on the Primary Node. I've run the job successfully with 32-bit runtime both on and off. The location of the file on the server happens to be on a share that resides on what is currently the Secondary Node.
When I try to run an exact copy of this Job on the Secondary Node (which has been set up with Read All Connections: Yes), I get an error, regardless of the 32-bit runtime option. At this point, the location of the file is on the Secondary Node.
The Error is: "Login failed for user 'OurDomainAgent_Account'".
The Agent is a member of NT Service\SQLServerAgent on both instances, and that account is a member of sysadmin. Adding the Agent account as well, and giving that account sysadmin, makes no difference either.
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:\SQL\MS SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName' (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.
Starting a few minutes later, the Agent Job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary, and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries only exist for DR purposes - no one runs reports against them or uses them at all. I had a similar problem last weekend on a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
Today we received an issue on an application database: the internal free space on the DB is 0%. The database was designed as below:
name    fileid  filename                                   filegroup  size         maxsize        growth     usage
XX      1       I:\Data\MSSQL.1\MSSQL\Data\New XX.mdf      PRIMARY    68140032 KB  Unlimited      0 KB       data only
XX_log  2       I:\Data\MSSQL.1\MSSQL\Data\New XX_log.LDF  NULL       1050112 KB   2147483648 KB  102400 KB  log only
XX_2    3       I:\Data\MSSQL.1\MSSQL\Data\New XX_2.ndf    PRIMARY    15458304 KB  Unlimited      0 KB       data only
XX_3    4       I:\Data\MSSQL.1\MSSQL\Data\New XX_3.ndf    PRIMARY    13186048 KB  Unlimited      0 KB       data only
XX_4    5       I:\Data\MSSQL.1\MSSQL\Data\New XX_4.ndf    PRIMARY    19570688 KB  Unlimited      204800 KB  data only
XX_5    6       I:\Data\MSSQL.1\MSSQL\Data\New XX_5.ndf    PRIMARY    19591168 KB  Unlimited      204800 KB  data only
Two of the secondary data files had autogrowth enabled (unrestricted, in 200 MB increments), and three of the data files, including the primary, had autogrowth turned OFF. The application owners are complaining that there is no internal free space on the DB.
What we fail to understand is this: when autogrowth was already turned OFF on 3 of the data files (1 primary and 2 secondary), why was the application trying to increase the space on those .mdf and .ndf files? And when autogrowth is turned ON on 2 of the secondary data files, why was the DB not able to expand into those files, given that autogrowth is turned off on the 3 other files?
What more data do I need so that I can submit an analysis of this?
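One piece of data that usually settles this is the allocated-versus-used space per file, since "internal free space" comes from FILEPROPERTY rather than from the drive. A minimal sketch, run inside the affected database (sys.database_files assumes SQL Server 2005 or later, which matches the MSSQL.1 path):

SELECT name,
       size / 128                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb,
       growth,
       max_size
FROM sys.database_files
WHERE type_desc = 'ROWS';

Files with autogrowth disabled will never grow regardless of what the application does, and files with autogrowth enabled can still fail to grow if the I: volume itself is out of space, so checking the drive's free space alongside this output is worthwhile.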
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox which works with file-system operations like this, yet it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.
An alternate approach would be to run the backup job again after the main backup, with the destination changed to the alternate location. But I was thinking that another backup job would probably put more overhead on the server than a simple file copy. If I do end up taking this approach, I could also use the cleanup task to toss older .bak files in the alternate directory.
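There is no file-copy task in the maintenance-plan/job toolbox itself, but a job step of type "Operating system (CmdExec)" - or a T-SQL step using xp_cmdshell, if that is enabled - stays within the stock tooling. A minimal sketch of the xp_cmdshell variant, with placeholder paths and file pattern; robocopy's /MAXAGE:1 limits the copy to files written in the last day:

EXEC master.dbo.xp_cmdshell
     'robocopy "D:\Backups" "E:\BackupCopies" MyDb_*.bak /MAXAGE:1';

Running it as the step after the backup step means it only fires when the backup succeeds.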
We have set up log shipping between a primary and a secondary DB. The secondary DB is currently in the Standby/Read-Only option, and I cannot take a backup of the secondary DB now.
Shall we disable log shipping, change the DB option to multi-user mode, and take the backup? Or is there a different method, without disabling log shipping?
I have two databases, db1 and db2. I have made filegroups for db1 using database partitioning, and now I want to attach those filegroups to db2. I made a backup of the filegroups to get around the error 'the filegroup is in use'. But now, while attaching with the sp_attach_single_file_db stored procedure from the path where the backup of db1's filegroups is present, I get the error on db2 that 'db2 already exists'. I have been unable to find any fruitful result after days and days of searching. Can anyone please tell me how to deal with this? I am running out of time, so urgent replies are highly appreciated.
Hi all, I've been given a table structure with data and the expected result, and I want to implement it in SQL Server 7.0. If I use an inner join I get 4 rows (2*2). Please let me know how to get the expected result. Thanks in advance, Tarriq
This query is part of a larger query that updates a table holding statistics for reporting. It yields actual units per minute, by plant, by month. Some of the plants don't produce anything in certain months, so I'm ending up with a divide-by-zero error. I think I just need to stick another CASE statement in for each month, but that seems like it could get pretty ugly.
Any suggestions on how to improve this?
SELECT FL.REPORT_PLANT,
       [JAN] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 1  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 1  THEN PC.HOURS * 60 ELSE 0 END),
       [FEB] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 2  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 2  THEN PC.HOURS * 60 ELSE 0 END),
       [MAR] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 3  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 3  THEN PC.HOURS * 60 ELSE 0 END),
       [APR] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 4  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 4  THEN PC.HOURS * 60 ELSE 0 END),
       [MAY] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 5  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 5  THEN PC.HOURS * 60 ELSE 0 END),
       [JUN] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 6  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 6  THEN PC.HOURS * 60 ELSE 0 END),
       [JUL] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 7  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 7  THEN PC.HOURS * 60 ELSE 0 END),
       [AUG] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 8  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 8  THEN PC.HOURS * 60 ELSE 0 END),
       [SEP] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 9  THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 9  THEN PC.HOURS * 60 ELSE 0 END),
       [OCT] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 10 THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 10 THEN PC.HOURS * 60 ELSE 0 END),
       [NOV] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 11 THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 11 THEN PC.HOURS * 60 ELSE 0 END),
       [DEC] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 12 THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
             / SUM(CASE WHEN MONTH(PC.MNTHYR) = 12 THEN PC.HOURS * 60 ELSE 0 END)
FROM PRODUCTION_CMPLT PC
INNER JOIN FACILITY_LINES FL
        ON PC.MANUF_SITE = FL.MANUF_SITE
       AND PC.PROD_LINE = FL.PROD_LINE
INNER JOIN PROD_MASTER PM
        ON PC.PRODUCT = PM.PRODUCT
WHERE YEAR(PC.MNTHYR) = YEAR(GETDATE())
  AND PM.UOM <> 'LB'
GROUP BY FL.REPORT_PLANT
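One way to avoid the divide-by-zero without adding another CASE per month is to wrap each denominator in NULLIF, which turns a zero-minute month into NULL instead of an error (ISNULL around the whole expression can map it back to 0 if needed). A minimal sketch of the change for one month, using the same columns as above:

[JAN] = SUM(CASE WHEN MONTH(PC.MNTHYR) = 1 THEN PC.TONS * 2000 / PM.EA_WT ELSE 0 END)
      / NULLIF(SUM(CASE WHEN MONTH(PC.MNTHYR) = 1 THEN PC.HOURS * 60 ELSE 0 END), 0),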
This small script seems to eliminate the dupes, but I can't figure out how to properly replace the table with the output of the script, with all the dupes gone.
select distinct * from dbo.SecurityEventsTest where recordnumber IN (select recordnumber from dbo.SecurityEvents) order by recordnumber
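A minimal sketch of one way to swap the deduplicated output back into the table, assuming dbo.SecurityEventsTest has no identity column (otherwise SET IDENTITY_INSERT would be needed) and that the table should end up holding exactly what the script returns:

SELECT DISTINCT *
INTO dbo.SecurityEventsTest_dedup
FROM dbo.SecurityEventsTest
WHERE recordnumber IN (SELECT recordnumber FROM dbo.SecurityEvents);

TRUNCATE TABLE dbo.SecurityEventsTest;

INSERT INTO dbo.SecurityEventsTest
SELECT * FROM dbo.SecurityEventsTest_dedup;

DROP TABLE dbo.SecurityEventsTest_dedup;

SELECT ... INTO creates the staging table, so it must not already exist; wrapping the whole sequence in a transaction keeps readers from seeing the table empty mid-swap.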
In my query's WHERE clause I am using BETWEEN to get data. Because of the time portion in the data I need to eliminate that before comparing; how can I eliminate it? My WHERE clause is below. Because of it, my query performance is falling. Please help.
CONVERT(DATETIME, CONVERT(CHAR(20), OpportunityDate,110)) BETWEEN CONVERT(CHAR(20),@FDate,110) AND CONVERT(CHAR(20),@TDate,110)
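Converting the column defeats index use; comparing against a half-open range built only from the parameters keeps OpportunityDate untouched and the predicate sargable. A minimal sketch of the rewritten condition, using the same parameters:

OpportunityDate >= CONVERT(DATETIME, CONVERT(CHAR(10), @FDate, 110), 110)
AND OpportunityDate <  DATEADD(DAY, 1, CONVERT(DATETIME, CONVERT(CHAR(10), @TDate, 110), 110))

The inner CONVERTs strip the time from the parameters (cheap, done once), and the "less than the next day" bound makes rows from any time on @TDate still match.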
Hi All, I have an SP which I run inside a for loop. I am running the SP for all the products in a listbox, so for each product the features are extracted through the SP. But some features are the same for 2 or 3 products, so in the DataTable the features are repeated. Is there any way to eliminate the duplicates from the DataTable on the server side? Hope I am not confusing you. E.g.: product1 --- test1, test2, test3; product2 --- test2, test4; so the DataTable has test1, test2, test3, test2, test4 -- I have to eliminate one test2 from this. Any ideas? Thanks
Hey, I have some field value entries in my database that are just spaces, like ' ', and I want to eliminate them. When I use IS NOT NULL in the query, it only eliminates the rows with NULL values, so how could I modify the query to also eliminate the rows with only spaces in the field value?
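A minimal sketch of the extra predicate, with some_column standing in for the real field name; trimming first also catches values made of several spaces:

WHERE some_column IS NOT NULL
  AND LTRIM(RTRIM(some_column)) <> ''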
Hi, I have null values in my table and I want to eliminate them. This is my query: select p_type from process_general. Output: 1. BSB HEATER PACKAGE 2. (blank).
So in my output there is one row of data and one null row. I want to show the output without that null row, because I am filling this data into my combobox, so I need it without the null row. Please give me a query for this.
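A minimal sketch using the query from the post; the IS NOT NULL predicate drops the empty row before it reaches the combobox (adding an LTRIM(RTRIM(p_type)) <> '' check would also cover blank strings):

SELECT p_type
FROM process_general
WHERE p_type IS NOT NULL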
I've been making great progress, but I've hit another roadblock which a newbie intern like myself can't surpass. What's worse is that no one is in the office today! Maybe someone can point me in the right direction with this SQL:
FROM tblUserDepartment ud
INNER JOIN tblRequest r ON ud.departmentID = r.departmentID
INNER JOIN tblDepartment d ON r.departmentID = d.departmentID
INNER JOIN tblStatus s ON r.reqStatus = s.statusID
INNER JOIN tblUser u ON r.requserID = u.userID
LEFT JOIN tblRequestAssignee ra ON r.requestID = ra.requestID
WHERE ud.userID = @userID
This works great except for one thing. In tblRequestAssignee, you have 1 primary assignee and can have several other assignees (that are not primary), denoted by a bit field "isPrimaryAssignee" in tblRequestAssignee. When I run the query, I see every request I want to, but it duplicates requests with more than one assignee. What I'm trying to do is make only the primary assignee display if there is one; if there isn't, then NULL is displayed (which is already happening).
Like I said, the query is mostly working right except for this duplicate record that displays when there are 2 assignees. Any help would once again be greatly appreciated.
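A minimal sketch of the change, using only the tables already in the query: moving the primary-assignee condition into the LEFT JOIN keeps one row per request and still yields NULL when no primary assignee exists (a WHERE clause on the flag would instead filter those requests out entirely):

LEFT JOIN tblRequestAssignee ra
       ON r.requestID = ra.requestID
      AND ra.isPrimaryAssignee = 1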
Hello, I need to eliminate the duplicated rows in SQL Server 2000, but the duplication is only over some of the fields of the row; however, I need all the fields of the row. For example, I have the following structure: Id_type, number_type, date, diagnosis, sex, age, city
After much analysis I get many rows where the first three fields are repeated, so I need to keep only one of them, but with all the other fields. This is because I need only the first time the diagnosis appears.
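A minimal sketch that works on SQL Server 2000 (no ROW_NUMBER), assuming the rows live in a table called analysis_result (placeholder) and that "the first time the diagnosis appears" means the earliest date per Id_type / number_type / diagnosis combination:

SELECT a.Id_type, a.number_type, a.date, a.diagnosis, a.sex, a.age, a.city
FROM analysis_result a
INNER JOIN (SELECT Id_type, number_type, diagnosis, MIN(date) AS first_date
            FROM analysis_result
            GROUP BY Id_type, number_type, diagnosis) f
        ON a.Id_type = f.Id_type
       AND a.number_type = f.number_type
       AND a.diagnosis = f.diagnosis
       AND a.date = f.first_date

If two rows share that earliest date they will both come back, so a tie-breaking column (an identity, for example) is needed to force exactly one.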
I have a table with 68 columns. If all the columns hold the same value except for one, which is a datetime column, I want to delete all but one of the duplicate rows, preferably keeping the latest one, though that is not important. Can someone show me how to accomplish this?
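A minimal sketch assuming SQL Server 2005 or later and placeholder names: dbo.WideTable for the table, ChangedAt for the datetime column, and Col1 ... Col67 standing in for the 67 columns that define a duplicate. Ordering by ChangedAt DESC keeps the latest copy and deletes the rest:

;WITH ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Col1, Col2, /* ...list all 67 columns... */ Col67
                              ORDER BY ChangedAt DESC) AS rn
    FROM dbo.WideTable
)
DELETE FROM ranked
WHERE rn > 1;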