I need to create a new database from a backup file of an already existing database (vehicledb) on this server, so I created a new database named "vehicledb2". But when I try the restore, it complains that the file is in use and fails with: "The backup set holds a backup of a database other than the existing 'Vehicles2' database. Restore of database 'Vehicledb' failed."
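For reference, a minimal sketch of the kind of restore being attempted, assuming hypothetical backup paths and logical file names (check the real ones with RESTORE FILELISTONLY); restoring a vehicledb backup into a differently named database generally needs the files relocated with MOVE, plus REPLACE because the target database already exists:

-- Hedged sketch: paths and logical file names below are assumptions.
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\vehicledb.bak';

-- Restore the vehicledb backup into vehicledb2, moving the files so they
-- do not collide with the original database's files.
RESTORE DATABASE vehicledb2
FROM DISK = N'C:\Backups\vehicledb.bak'
WITH MOVE N'vehicledb'     TO N'C:\Data\vehicledb2.mdf',
     MOVE N'vehicledb_log' TO N'C:\Data\vehicledb2_log.ldf',
     REPLACE;   -- needed because vehicledb2 was created separately beforehand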
I have used the copy database wizard, but I realized I had forgotten to shrink the transaction log file. So I canceled the wizard. My database, detached by the wizard, has now disappeared. The mdf file is still there, but when I try to attach it manually I get the "create file encountered operating system error 5 while attempting to open the physical file..." error.
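For context, a minimal sketch of the manual attach being attempted, with a hypothetical database name and path; operating system error 5 is an access-denied error, so the SQL Server service account needs permission on the .mdf for this to succeed:

CREATE DATABASE MyDatabase                      -- name and path are assumptions
ON (FILENAME = N'C:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;                         -- rebuilds the log if the .ldf is missing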
1. Flat File Source
2. Conditional Split: Case Good = !ISNULL(KEY), Case Error = ISNULL(KEY)
3. Case Good -> writes to Good Flat File (with timestamp in the title)
4. Case Error -> writes to Error Flat File (with timestamp in the title)
Most job runs have no errors, but the error file is created as a zero-byte file anyway. If there are no error records, I don't want the error file created. How might I accomplish this?
I need to write a process that gets the file size in KB and the record count of a file. I was planning on writing a C# console app that takes the file path and name as parameters; however, should I use a CLR function instead?
I can't put a script in the SSIS package that brings the file down, because it has been deemed that we only use SSIS for file consumption.
I need to find out the number of columns in a flat file before I process that particular file. I have the file name in the @filename variable and the file path in the @filepath variable, but I do not know how to check the column names before I process the file.
@filePath = C:DatabaseSourceFilesCAHCVSSourceFiles. I am using a For Each Loop container to read the files one by one and put the file name in the @filename variable, and my file names look like
Now what I have to do is make sure that ID, Name, City, County, and Phone are present in the flat file. If they are not there, I have to send a mail to the client saying that the file is not valid. I also need to calculate the size of the flat file.
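A hedged T-SQL sketch of one way to inspect the header row before processing; the literal path and the check are assumptions, and OPENROWSET(BULK ...) needs a literal path, so using @filepath and @filename would require wrapping this in dynamic SQL:

DECLARE @contents VARCHAR(MAX), @header VARCHAR(4000);

-- Read the whole file as one blob (path is an assumption).
SELECT @contents = BulkColumn
FROM OPENROWSET(BULK 'C:\DatabaseSourceFiles\SomeFile.csv', SINGLE_CLOB) AS f;

-- First line = header row.
SET @header = LEFT(@contents, CHARINDEX(CHAR(10), @contents + CHAR(10)) - 1);

-- Column count, assuming a comma delimiter.
SELECT LEN(@header) - LEN(REPLACE(@header, ',', '')) + 1 AS column_count;

-- Crude check that the expected columns are present; send the "file is not valid" mail otherwise.
IF @header NOT LIKE '%ID%' OR @header NOT LIKE '%Name%' OR @header NOT LIKE '%City%'
   OR @header NOT LIKE '%County%' OR @header NOT LIKE '%Phone%'
    PRINT 'File is not valid - notify the client';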
The TEMPDB transaction log file keeps growing. The database server is new and the transaction log was pre-sized to 1 GB on installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual checkpoint was the only way to free some space so the file could be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at log_reuse_wait_desc for tempdb returns NOTHING, and tempdb itself isn't being used very much or growing in size.
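A diagnostic sketch that may help narrow this down: how much of the tempdb log is actually in use, and which open transactions are holding tempdb log space:

-- Current tempdb log size and usage.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM tempdb.sys.dm_db_log_space_usage;

-- Open transactions consuming tempdb log, largest first.
SELECT st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions  AS st
  ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = 2            -- tempdb
ORDER BY dt.database_transaction_log_bytes_used DESC;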
I have a filetable that contains a binary file. I need to do a selective read of the file stored in the filetable. I can write a C# CLR function that opens the file and reads n bytes from a starting byte. Or I can write a SQL statement that reads the stream in the filetable into a VARBINARY variable using SUBSTRING, beginning at the starting byte (offset from 1) for the same n bytes.
Both give me the same result. However, the SQL statement takes considerably longer to read. I know there is overhead in reading through SQL (an interpreted language), but the difference in performance is substantial, and I can only attribute this performance degradation to SQL first trying to "load" the entire stream before it identifies the portion of the stream it needs to read, beginning at the starting byte offset.
I wonder if this is the case or if there is another option to read a stream from a filetable directly through SQL queries that is more efficient.
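For reference, this is the shape of the T-SQL read being compared (the table name, file name, offset, and length are placeholders); SUBSTRING on the varbinary(max) FILESTREAM column uses a 1-based byte offset:

DECLARE @start  BIGINT = 1000;    -- 1-based starting byte (assumption)
DECLARE @length INT    = 4096;    -- number of bytes to read (assumption)

SELECT SUBSTRING(file_stream, @start, @length) AS chunk
FROM dbo.MyFileTable              -- hypothetical filetable name
WHERE name = N'somefile.bin';     -- hypothetical file name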
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly in the primary filegroup with multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Create another file called temp in PRIMARY, size 200MB.
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file.
6) Add another 2 files called DATA2 and DATA3, both 200MB.
7) We now have 3 empty files that I want the data distributed evenly over: FGTest, DATA2 & DATA3.
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA1 = 10MB, DATA2 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
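For anyone reproducing this, a condensed sketch of the key commands from the steps above (the database, the TEST table, and the 40MB of data are assumed to be in place already, and the file paths are assumptions); the final query shows how much space each file in PRIMARY actually ended up using:

USE FGTest;

ALTER DATABASE FGTest ADD FILE (NAME = N'temp',  FILENAME = N'C:\Data\FGTest_temp.ndf',  SIZE = 200MB);

DBCC SHRINKFILE (N'FGTest', EMPTYFILE);   -- push everything into 'temp'

ALTER DATABASE FGTest ADD FILE (NAME = N'DATA2', FILENAME = N'C:\Data\FGTest_DATA2.ndf', SIZE = 200MB);
ALTER DATABASE FGTest ADD FILE (NAME = N'DATA3', FILENAME = N'C:\Data\FGTest_DATA3.ndf', SIZE = 200MB);

DBCC SHRINKFILE (N'temp', EMPTYFILE);     -- expect the data to spread over FGTest, DATA2, DATA3

-- See where the pages actually landed.
SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';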
We have a large 'History' database that is currently about 4.5TB, with most of that in a datafile that is 4.2TB. We wanted to stop growth on the one large data file and have SQL Server allocate new data to the other data files, but this throws an error when we attempt to change the MAXSIZE settings:
ALTER failed for Database 'History' MODIFY FILE failed. Specified size is less than or equal to current size.
SQL Server is saying we can have a max size of 2 TB, and anything over that is blocked. Since the change is being blocked, the file continues to grow.
Is there any way to cap the growth of the 4.2TB file and not allow any more data to be written to it?
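One hedged workaround sketch: since MODIFY FILE refuses a MAXSIZE below the current size, disabling autogrowth on that file stops it from growing any further (the logical file name below is an assumption). Note that SQL Server can still place new rows in whatever free space already exists inside the file.

-- Stop the large file from auto-growing; future growth then has to come from the other files.
ALTER DATABASE History
MODIFY FILE (NAME = N'History_Data1', FILEGROWTH = 0);   -- logical file name is an assumption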
1) When we create indexes, the key columns are the columns used in the WHERE clause, and the included columns are the columns that can be used in the SELECT list and on the JOIN clause columns (see the sketch after these two points).
2) I am thinking that we should create a new index only if we find a saving of at least 50 msec.
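A small illustration of the distinction in point 1 (the table and column names are made up): key columns support seeks for WHERE/JOIN predicates, while INCLUDE columns only cover the SELECT list.

-- Key columns: CustomerId, OrderDate (usable for seeks in WHERE / JOIN).
-- Included columns: OrderTotal, Status (cover the SELECT list, not seekable).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (OrderTotal, Status);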
ALTER TRIGGER [dbo].[Trigger1] ON [dbo].[Table1]
WITH EXECUTE AS SELF
AFTER INSERT
[code]....
I am trying to create a trigger so that every time an entry is made in a table and Colum1 is 'entry', it starts a job. But the users running the inserts do not have permission to start jobs, so I need to make it run as a superuser. Where do I put the syntax in here? I have tried Execute as login 'superuser' before the EXEC statement, but it errors because the principal is not valid.
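A hedged sketch of where the clause goes: the impersonation is declared in the trigger header with EXECUTE AS 'user_name', which must be a database user rather than a login (which is why EXECUTE AS LOGIN in the body reports an invalid principal). The job name is an assumption, and calling msdb.dbo.sp_start_job under impersonation may additionally require the database to be marked TRUSTWORTHY or the module to be signed.

ALTER TRIGGER [dbo].[Trigger1] ON [dbo].[Table1]
WITH EXECUTE AS 'superuser'          -- 'superuser' must exist as a user in this database
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM inserted WHERE Colum1 = 'entry')
        EXEC msdb.dbo.sp_start_job @job_name = N'MyJob';   -- job name is an assumption
END;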
The scenario is that we have tables like a parent and a child table.
For example, we have a child table named AcademyContacts, with columns like:
Guid (PK), NOT NULL
AcademyId (FK), NOT NULL
Name, NULL
WorkPhone, NULL
CellPhone, NULL
Email Id, NULL
Other, NULL
Since we have allowed NULL for WorkPhone, CellPhone, and Email Id, the data still gets inserted into the table even when those particular columns are empty. What I need is: if WorkPhone, CellPhone, and Email Id are all empty, the insert into the table should not be allowed. It must raise something like "Please enter at least one of the WorkPhone, CellPhone or Email Id columns."
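The question asks about a trigger, but for reference here is a hedged sketch using a CHECK constraint instead, which rejects rows where all three columns are empty; the table and column names follow the description above (Email Id written with a space, as listed).

ALTER TABLE dbo.AcademyContacts
ADD CONSTRAINT CK_AcademyContacts_AtLeastOneContact
CHECK (WorkPhone IS NOT NULL OR CellPhone IS NOT NULL OR [Email Id] IS NOT NULL);
-- An insert with all three columns NULL now fails; the application can surface
-- "Please enter at least one of WorkPhone, CellPhone or Email Id" on that error.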
I'm trying to create a column of numbers that increment by one.
I'm not able to use a #temptable in the application I'm using so I cannot use IDENTITY(int,1,1).
I want to add an Id column to this query:
SELECT DISTINCT sd.name, ic.TABLE_SCHEMA, ic.TABLE_NAME
FROM sys.databases sd
CROSS JOIN INFORMATION_SCHEMA.COLUMNS ic
WHERE sd.name = 'ODS1stage'
ORDER BY TABLE_SCHEMA, TABLE_NAME
How can I accomplish this without creating a temp table? I would just alter the table and insert the numbers but there are 2000 rows.
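A sketch using ROW_NUMBER(), which numbers the rows without any temp table or IDENTITY column; it wraps the original query in a derived table so the DISTINCT is preserved:

SELECT ROW_NUMBER() OVER (ORDER BY TABLE_SCHEMA, TABLE_NAME) AS Id,
       name, TABLE_SCHEMA, TABLE_NAME
FROM (
    SELECT DISTINCT sd.name, ic.TABLE_SCHEMA, ic.TABLE_NAME
    FROM sys.databases AS sd
    CROSS JOIN INFORMATION_SCHEMA.COLUMNS AS ic
    WHERE sd.name = 'ODS1stage'
) AS d
ORDER BY TABLE_SCHEMA, TABLE_NAME;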
I have an update trigger. In this trigger I need to insert a few records into 3 tables. If an error occurs in any of these inserts, I still want the previous inserts to get committed. This trigger was originally written in Sybase, where it was possible to create transactions and commit them inside the trigger.
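A hedged sketch of one approach, under the assumption that the goal is for a failure in one insert not to undo the inserts that already succeeded: wrap each insert in its own TRY/CATCH. All table and column names are placeholders. Unlike Sybase, SQL Server does not allow committing the outer transaction inside a trigger, and errors severe enough to doom the transaction (XACT_STATE() = -1) will still roll everything back.

CREATE TRIGGER dbo.trg_SourceTable_Update       -- names are placeholders
ON dbo.SourceTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        INSERT INTO dbo.AuditTable1 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();   -- log and continue; the earlier work is kept
    END CATCH;

    BEGIN TRY
        INSERT INTO dbo.AuditTable2 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();
    END CATCH;

    BEGIN TRY
        INSERT INTO dbo.AuditTable3 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();
    END CATCH;
END;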
I need to give the script below, which contains a CREATE CREDENTIAL query, to an app team.
CREATE CREDENTIAL crdntl WITH IDENTITY = '<service_acct>', SECRET = '<pwd>';
GO
My concern is that I don't want the password to be visible. Basically, I want to use this credential to create a proxy, which is then used to run a SQL Agent backup job on a number of SQL servers. Also, I cannot leave the SECRET value blank (as MSDN suggests).
Is there any way to mask the password, or any other alternative solution?
I am trying to create a failover cluster without log shipping in SQL Server 2012, as I have done with a static instance and some databases. Is the "AlwaysOn" feature the solution when an application creates numerous databases at random within the instance and we need a failover scenario?
I haven't figured out how to add a Firebird linked server via ODBC.
My system:
Win 7 Pro 64-bit
SQL Server 2012 Express 64-bit
Firebird 2.5.3 64-bit
Official 64-bit ODBC drivers
I have created the ODBC source without a problem and checked it in Visual Studio and the SSMS Import Data wizard; both show me the data from the Firebird database. But when I try to create a linked server, I get something like this message (the image is from the web).
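For reference, a hedged sketch of adding the linked server on top of the 64-bit ODBC DSN via the MSDASQL provider; the DSN name and the Firebird login values are assumptions:

EXEC master.dbo.sp_addlinkedserver
     @server     = N'FIREBIRD',
     @srvproduct = N'Firebird',
     @provider   = N'MSDASQL',
     @datasrc    = N'FirebirdDSN';      -- name of the 64-bit system DSN (assumption)

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'FIREBIRD',
     @useself     = 'FALSE',
     @locallogin  = NULL,
     @rmtuser     = N'SYSDBA',          -- assumption
     @rmtpassword = N'masterkey';       -- assumption

-- Quick test once the linked server exists (Firebird syntax is passed through).
SELECT * FROM OPENQUERY(FIREBIRD, 'SELECT FIRST 10 * FROM RDB$RELATIONS');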