I wanted to remove an extra transaction log file that was no longer required, and ran the following against the database...
DBCC SHRINKFILE (DB_Name_log2, EMPTYFILE);
GO
ALTER DATABASE [Db_Name] REMOVE FILE DB_Name_log2;
GO
I got a successful removal message. But if I go into the properties of the database, and click on files, it still shows up. Why is this and how can I get rid of it?
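One quick hedged check is to ask the engine directly rather than the SSMS properties page, which can show cached metadata until it is reopened; the database name here is the one from the question:

    USE [Db_Name];
    SELECT name, type_desc, state_desc
    FROM sys.database_files;

If the file still appears there, the removal may simply be pending; reports suggest a log backup (under the full recovery model) can be needed before the dropped file disappears from the metadata.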
I set up SQL Server 2012 on Windows Server 2012 with the service accounts in the local Administrators group, but now that I'd like to remove the accounts from this group I'm finding they don't have the appropriate access to the network storage. Are there any notes on setting the per-service SIDs for SQL (SQL Engine, Analysis Services, Reporting Services, and Agent Service) so they can read the Data, Log, and TempDB mount points?
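A hedged sketch of the usual approach: grant the per-service SIDs rights directly on each mount point from an elevated command prompt. The paths are placeholders, and the SID names below are the default-instance ones (a named instance uses NT SERVICE\MSSQL$InstanceName and NT SERVICE\SQLAgent$InstanceName instead):

    rem grant the engine's per-service SID modify rights, inherited by subfolders and files
    icacls "D:\SQLData" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)M
    icacls "E:\SQLLogs" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)M
    icacls "F:\TempDB" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)M
    rem the Agent service has its own per-service SID
    icacls "D:\SQLData" /grant "NT SERVICE\SQLSERVERAGENT":(OI)(CI)M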
I have a query that I'm filtering on Customer ID (CustomerID = '12345'). Even though I need the query to filter on that value, I don't need to see the column in my results. I tried removing it from my SELECT DISTINCT list, but I'm guessing it needs to be there or the filter won't work (like I said, I'm very green). Is there something I can add to hide this column?
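A column only has to appear in the WHERE clause to filter on it; it can be left out of the SELECT list entirely. A minimal sketch with hypothetical table and column names:

    SELECT DISTINCT OrderDate, OrderTotal   -- CustomerID omitted from the output
    FROM dbo.Orders
    WHERE CustomerID = '12345';             -- the filter still applies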
I need to pull all records from the Item table and then populate the most recent OrderNo and O.DateCreated. I got this far, but if there is a part in the Item table that does not have an order against it, I do not get a row at all, and my goal is to see any parts that have not been ordered in the last year. Something like this:
SELECT I.PartNumber, I.Description, I.DateCreated
FROM Item I
CROSS APPLY (SELECT TOP 1 O.OrderNo, O.DateCreated
             FROM Orders O
             WHERE O.PartNumber = I.PartNumber
             ORDER BY O.DateCreated DESC) O

Desired output:

    PartNumber   OrderNo   DateCreated
    A1XXX        1CHXX1    1/8/2014
    A2XXX        1CHXX3    1/20/2014
    A3XXX        NULL      NULL
    B1XXX        2CHXX1    2/10/2014
    B2XXX        2CHXX3    2/22/2014
    B3XXX        NULL      NULL
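One likely fix (hedged): CROSS APPLY behaves like an inner join and drops Item rows that have no matching order, while OUTER APPLY keeps them and returns NULLs, which matches the desired output above:

    SELECT I.PartNumber, I.Description, I.DateCreated,
           O.OrderNo, O.DateCreated AS LastOrdered
    FROM Item I
    OUTER APPLY (SELECT TOP (1) O.OrderNo, O.DateCreated
                 FROM Orders O
                 WHERE O.PartNumber = I.PartNumber
                 ORDER BY O.DateCreated DESC) O;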
I have a query which contains 12 left outer joins. I removed some of the joins that don't have parameters against them. The result is the same, but usually when you remove a join the query should take less time to execute; for me it is taking more. What could be the reason?
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from, ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on the person_ref, log_seq and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete:
SELECT *
FROM (
    SELECT cml.person_ref,
           cml.open_date + cml.open_time AS [datetime],
           cml.open_user,
           cml.log_type,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time,
                                           cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo,
           n.note
    FROM ih_cml_log_entry cml
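A hedged sketch of the usual delete pattern: put the ROW_NUMBER query in a CTE and delete every row numbered above 1. Deleting through the CTE removes the underlying ih_cml_log_entry rows; the join to ih_cml_log_notes (elided in the fragment above) would need the same treatment separately:

    ;WITH dupes AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time,
                                               cml.open_user, cml.log_type
                                  ORDER BY (SELECT 0)) AS RowNo
        FROM ih_cml_log_entry cml
    )
    DELETE FROM dupes
    WHERE RowNo > 1;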
I usually do this through Access so I'm not too familiar with the string functions in SQL. My question is, how do you remove characters from the middle of a string?
Ex: the string value is 10 characters long, e.g. X000001250. The end result should look like X1250.
I've tried mixing and matching multiple string functions with no success. The only solution I have come up with removes ALL of the zeros, including the trailing zero. The goal is to only remove the consecutive zeroes in the middle of the string.
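One hedged approach: strip the first character, let a cast to int drop the leading zeros of the numeric remainder, then put the letter back. This assumes the format is always one letter followed by digits that fit in an int:

    DECLARE @s varchar(10) = 'X000001250';
    SELECT LEFT(@s, 1)
           + CAST(CAST(STUFF(@s, 1, 1, '') AS int) AS varchar(9));  -- returns X1250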
I have a varchar field which contains some Greek characters (α, β, γ, etc...) among the regular Latin characters. I need to replace these characters with a word (alpha, beta, gamma etc...). When I try to do this, I find that it is also replacing some of the Latin characters.
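The usual culprit (hedged) is an accent-insensitive collation under which a Greek letter compares equal to a Latin one, so REPLACE matches both. Forcing a binary collation inside the REPLACE makes the match exact; the table and column names are hypothetical, and the binary collation should match the column's code page:

    SELECT REPLACE(MyCol COLLATE Greek_BIN, 'α', 'alpha')
    FROM dbo.MyTable;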
I have a SQL Server 2012 server and I need to prevent users from creating new schemas by mistake. Is there any way to revoke that permission alone while still letting the users create their own objects in dbo (yes, I know they shouldn't be in dbo, but that is another issue)?
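One hedged option: CREATE SCHEMA is its own database-level permission, separate from CREATE TABLE and the rest, so it can be denied on its own. The principal name is hypothetical, and a DENY won't restrain members of roles such as db_owner:

    DENY CREATE SCHEMA TO [AppUsers];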
I've inherited a database from a SQL 7 system and converted it to SQL 2000. It has a secondary data file (.ndf) and a secondary log file. Because the server configurations are different, it's no longer necessary to have the secondary files. How do I merge the secondary file data into the primary files and then delete the secondary files?
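The usual pattern (hedged sketch; the logical file names are hypothetical, and DBCC SHRINKFILE with EMPTYFILE exists as far back as SQL 2000): empty the secondary data file into the remaining files of its filegroup, then remove it; the secondary log file can be removed once it holds no active log:

    DBCC SHRINKFILE (MyDb_Data2, EMPTYFILE);
    ALTER DATABASE [MyDb] REMOVE FILE MyDb_Data2;
    -- the secondary log file, once it contains no active log:
    ALTER DATABASE [MyDb] REMOVE FILE MyDb_Log2;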
We have a multi-site AG and are demoting one of the remote sites out of the AG. In doing so, we've discovered that no logic exists to remove an IP address from the listener.
It seems that the ALTER AVAILABILITY GROUP MODIFY LISTENER logic lacks functionality other than ADD. [URL] ....
I'm afraid to just remove the IP address from the cluster object as that IP is also stored in the HADR systables.
SELECT * FROM sys.availability_group_listener_ip_addresses;
SELECT * FROM sys.availability_group_listeners;
I am parsing a directory of flat files and looping through it with a Foreach Loop. Some of the files have lines that contain characters that I would like to remove; in fact, it would be good if I could remove the entire line. Is there a way to do this with a Script Task or some other way?
I have a file I'm pulling from another type of database into an Excel spreadsheet and then using my dtsx package to import the spreadsheet into my SQL database. The problem I'm having is that one of the fields coming out of the database to the spreadsheet has the thousands separator in it, and I want to use that field as a numeric field without the ",". Right now I have a macro that I run on the spreadsheet to reset the field to straight numbers without commas before importing it, but I would like to configure my Integration package to do it automatically.
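One way to do this inside the package (hedged; the column name Amount is a placeholder): a Derived Column transformation can strip the commas and cast the result to a numeric type before it reaches the SQL destination, replacing the Excel macro step:

    (DT_NUMERIC, 18, 2)REPLACE(Amount, ",", "")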
I have a large (420GB) database that has never had data archived off before. I have taken a backup to a test server and run a script supplied by the product vendor, which has removed a large amount of old data that is no longer required.
I have checked within Enterprise Manager that this data has now gone; however, the actual file itself has not shrunk in size. Is there a further step I need to take to get back the space?
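Deleting rows frees pages inside the file but never shrinks the file itself; that takes an explicit shrink. A hedged sketch, where the logical file name and the target size (in MB) are hypothetical; note that shrinking heavily fragments indexes, so rebuilding them afterwards is usual:

    DBCC SHRINKFILE (MyDb_Data, 250000);   -- target size in MB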
I need to write a process to get the file size in KB and the record count in a file. I was planning on writing a C# console app that takes the file path and name as a parameter; however, should I use a CLR?
I can't put a script in the SSIS package when it's bringing the file down, because it has been deemed that we only use SSIS for file consumption.
I need to find out the number of columns in a flat file before I process that particular file. I have the file name in the @filename variable and the file path in the @filepath variable, but I do not know how to check the column names before I process the file.
@filePath = C:DatabaseSourceFilesCAHCVSSourceFiles, and I am using a Foreach Loop container to read the files one by one and put each file name in the @filename variable, and my file name is like
Now what I have to do is make sure that ID, Name, City, County, Phone are present in the flat file. If they are not, I have to send a mail to the client saying the file is not valid. I also need to calculate the size of the flat file.
The TEMPDB transaction log file keeps growing. The database server is new, and the transaction log was presized to 1 GB on installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual CHECKPOINT was the only way to free some space so it could be shrunk back to a usable size, and the usage of the file is still going up.
I am struggling to find which process is causing the log to be used so heavily. Looking at the log_reuse_wait_desc for tempdb returns 'NOTHING', and tempdb itself isn't being used very much or growing in size.
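A pair of DMVs that often help pin down the culprit (hedged sketch): list open transactions and how much tempdb log each one is holding:

    SELECT st.session_id,
           dt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS dt
    JOIN sys.dm_tran_session_transactions AS st
         ON st.transaction_id = dt.transaction_id
    WHERE dt.database_id = 2;   -- tempdb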
I have a filetable that contains a binary file. I need to do a selective read of the file stored in the file table. I can write a C# CLR function that will open the file, read n bytes the from a starting byte. Or I can write a SQL statement that reads the stream in the filetable into a VARBINARY variable using SUBSTRING beginning at the starting byte (offset from 1) for the same n bytes.
Both give me the same result. However, the SQL statement takes considerably longer to read. I know there is overhead in reading through SQL (an interpreted language), but the difference in performance is substantial, and I can only attribute this degradation to SQL first trying to "load" the entire stream before it identifies the portion that it needs to read beginning at the starting byte offset.
I wonder if this is the case or if there is another option to read a stream from a filetable directly through SQL queries that is more efficient.
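For reference, a hedged sketch of the pure T-SQL read being compared (the table and file names are hypothetical; file_stream and name are the standard filetable columns):

    DECLARE @start bigint = 1048576, @n int = 4096;

    SELECT SUBSTRING(file_stream, @start, @n)
    FROM dbo.MyFileTable
    WHERE name = 'somefile.bin';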
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly across the primary filegroup when it has multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Create another file called temp in PRIMARY, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3, both 200MB
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly (a T-SQL sketch of these steps follows below)
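A hedged T-SQL sketch of the steps above (file paths and sizes are illustrative):

    CREATE DATABASE [FGTest]
    ON PRIMARY (NAME = FGTest, FILENAME = 'D:\Data\FGTest.mdf', SIZE = 200MB);

    -- steps 2-3: create TEST and load ~40MB into it, then:
    ALTER DATABASE [FGTest] ADD FILE (NAME = temp, FILENAME = 'D:\Data\temp.ndf', SIZE = 200MB);
    DBCC SHRINKFILE (FGTest, EMPTYFILE);    -- push all data into temp

    ALTER DATABASE [FGTest] ADD FILE (NAME = DATA2, FILENAME = 'D:\Data\DATA2.ndf', SIZE = 200MB);
    ALTER DATABASE [FGTest] ADD FILE (NAME = DATA3, FILENAME = 'D:\Data\DATA3.ndf', SIZE = 200MB);
    DBCC SHRINKFILE (temp, EMPTYFILE);      -- redistribute over FGTest, DATA2, DATA3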
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of the data to the original file and then spreading the other 50% evenly over the remaining files in PRIMARY.
We have a large 'History' database that is currently about 4.5TB, with most of that in a data file that is 4.2TB. We wanted to stop growth on the one large data file and have SQL Server allocate new data to the other data files, but this throws an error when we attempt to change the MAXSIZE settings:
ALTER failed for Database 'History'
MODIFY FILE failed. Specified size is less than or equal to current size.
SQL Server is saying the maximum size we can set is 2TB, and anything over that is rejected. Since the change is blocked, the file continues to grow.
Is there any way to cap the growth of the 4.2TB file and not allow any more data to be written to it?
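One hedged workaround: instead of lowering MAXSIZE (which can't go below the current size), turn off autogrowth on that one file so no further space is ever added to it; the logical file name is hypothetical. Note this stops the file growing, though SQL Server can still fill any free space remaining inside it:

    ALTER DATABASE [History]
    MODIFY FILE (NAME = History_Data1, FILEGROWTH = 0);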
I am trying to create an SSIS package with a dynamic CSV file as output, where the file contains query output.
sample file name:
Unique identifier + query output + systemdate();
The expression is looking like this.
@[User::FilePath] + @[User::FileName] + ".CSV"
The FilePath is a variable from the SSIS package. The file name is the output from a SQL query; using a script task I have assigned the value to @[User::FileName].
When I debug the script task the value is set properly, but when I use the same variable for the Flat File destination it's not working.
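One thing worth checking (hedged): the expression needs to sit on the flat file connection manager's ConnectionString property, with DelayValidation set to True on that connection manager so design-time validation doesn't run before the variable is populated. To fold the system date into the name, an expression along these lines (the date cast is illustrative):

    @[User::FilePath] + @[User::FileName] + (DT_WSTR, 10)(DT_DBDATE)GETDATE() + ".CSV"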
I have created a file group for my database. First I took a backup of the individual file groups (mdf and ndf), then I tried to restore only the secondary (ndf) file group, and I got an error like:
Restore failed for Server 'pcname\SQLEXPRESS'. (Microsoft.SqlServer.SmoExtended)
File 'regSQL_dat' was not backed up in file 1 on device 'D:\vtndf.bak'. The file cannot be restored from this backup set.
RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3144)
When I tried to restore only the primary file group I got the same error.
Can I restore an individual file group?
For the purpose of data archiving, I took a backup of an ndf file (it contains very old data) and then removed the file from the database. Now my customer is asking for the data in that file, so I have to attach/restore the ndf file again. How do I attach/restore it?
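A hedged sketch of what a filegroup-level restore can look like (the database, filegroup and backup paths are hypothetical). The big caveat: under the full or bulk-logged recovery model a piecemeal restore needs the subsequent log backups to bring the filegroup consistent, and under the simple recovery model this only works for read-only filegroups:

    RESTORE DATABASE [MyDb]
        FILEGROUP = 'ARCHIVE'
        FROM DISK = 'D:\Backups\MyDb_archive.bak'
        WITH NORECOVERY;
    RESTORE LOG [MyDb] FROM DISK = 'D:\Backups\MyDb_log.trn' WITH RECOVERY;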
I've run it for ih and ih_restore and can see that the "reserved" and "data" fields are growing, but with no extra rows, so no inserts are happening in the database? Why would this be happening?
Example from the CSV file of a table that I've exported:
From ih:
    name             rows    reserved    data       index_size  unused
    em_comm_costing  384191  1011704 KB  512424 KB  498648 KB   632 KB
From ih_restore:
    name             rows    reserved    data       index_size  unused
    em_comm_costing  384191  119808 KB   62960 KB   56088 KB    760 KB
So the em_comm_costing rows are 384191 in both, but the data field has increased to 512424 KB from 62960 KB.
The database is being mirrored as well, but I'm not sure if that would be affecting the size?
Error:

(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdev1' cannot be removed because it is not empty.
Note:
=> I restarted SQL Server from SSMS and then ran the same commands mentioned above, and I am getting the same error.
=> I executed the above commands and then restarted the services; no change.
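A commonly suggested sequence (hedged): empty the file first, then remove it. Because tempdb is rebuilt at every restart, running the removal immediately after a restart, before new activity lands in the file, tends to work best:

    USE tempdb;
    GO
    DBCC SHRINKFILE (tempdev1, EMPTYFILE);
    ALTER DATABASE tempdb REMOVE FILE tempdev1;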