SQL Server Admin 2014 :: Cannot Open Data File When Running Agent Job
Apr 30, 2015
I recently installed a standalone version of SQL Server 2014 Standard on my work computer. I used Access before, but I want to use a SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to pick up this file and import it into the server (it's basically a list of names).
Here is what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run daily at a certain time
At first the jobs would fail with an Access is Denied error. I added a user under Credentials with my network account (which has admin rights on the work computer). I also added a proxy for the credential user I made.
Now the jobs fail with a "Cannot open data file" error. I tried changing things here and there, but I can't get it to work.
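For reference, the credential and proxy were set up roughly like this (a sketch; the credential, proxy, and domain account names are placeholders for my real ones):

CREATE CREDENTIAL SSISFileCredential
    WITH IDENTITY = 'DOMAIN\MyUser', SECRET = 'StrongPasswordHere';

EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SSISFileProxy',
    @credential_name = N'SSISFileCredential',
    @enabled = 1;

-- Grant the proxy to the SSIS package execution subsystem (subsystem_id 11).
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISFileProxy',
    @subsystem_id = 11;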
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Will I come across any performance issues if I create/pre-allocate an additional data file in the same file group so that the existing files don't grow too much?
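Something like this is what I have in mind (a sketch; the database, filegroup, file name, and path are placeholders):

ALTER DATABASE MyDB
ADD FILE
(
    NAME = 'MyDB_Data5',
    FILENAME = 'E:\SQLData\MyDB_Data5.ndf',
    SIZE = 70GB,           -- pre-allocated up front
    FILEGROWTH = 1GB
)
TO FILEGROUP FG_Data;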
On one server we had file growth, and then we had to add a new hard drive and a new file on it. Now we have a new server with a huge hard drive, but all the files remain. Can I reduce these files to one data file or not?
I was running an operation to shrink file/emptyfile a data file, and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the emptyfile completed (and that the engine isn't going to try to put more data in there for when I schedule the removal part again a week or more from now).
How does the engine know not to put any more data in there, and how long does that situation last?
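For reference, the operation was basically this pattern (a sketch; the database and logical file names are placeholders):

USE MyDB;
-- Step 1: push the data out of the file into the other files in the same filegroup.
DBCC SHRINKFILE ('MyDB_Data4', EMPTYFILE);
-- Step 2 (the part I will reschedule later): drop the now-empty file.
ALTER DATABASE MyDB REMOVE FILE MyDB_Data4;

My understanding is that EMPTYFILE also marks the file so the engine stops allocating new extents in it, which is the part I want to confirm.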
I created a SSIS package which will generate an output file and place it on a remote fileshare location which will look something like this
\\RemoteServerName\RemoteFilePath
The package executes fine when I run it through BIDS or through the Execute Package Utility, and it writes the output file to the remote file share location.
I created a SQL Agent job for the package and ran the job. It throws an error saying:
Executed as user: Domain\User. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:33:06 AM Error: 2008-03-10 10:33:22.22 Code: 0xC020200E Source: DFT_Generate Output File Description: Cannot open the datafile "\\RemoteServerName\RemoteFilePath\OutputFileName.txt". End Error Error: 2008-03-10 10:33:22.34 Code: 0xC004701A Source: DFT_Generate Output File DTS.Pipeline Description: component "FF_DST_Output" (160) failed the pre-execute phase and returned error code 0xC020200E. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:33:06 AM Finished: 10:33:22 AM Elapsed: 15.891 seconds. The package execution failed. The step failed.
Domain\User has all the permissions on the remote file share location. SQL Server Agent is running with the log-on account Domain\User (same as above).
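Is there a quick way to verify that the server itself can reach that share? The only thing I could think of is something like this (a sketch; it tests the Database Engine service account's access rather than the Agent job step context, and assumes xp_cmdshell is enabled):

EXEC master..xp_cmdshell 'dir "\\RemoteServerName\RemoteFilePath"';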
A little background on what I am trying to achieve first. We are moving to Azure virtual machines and we will have 8 disks on the SQL Server box. I am adding more files to the primary file group and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables which is working fine. No problems so far all is good.
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the non-clustered indexes doesn't rebalance the BLOB or LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB or BLOB data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and just redistribute it evenly in the same way that I can with in-row data, which is working fine.
One solution I thought about was to BCP data out of the table, truncate the table and then BCP back into the table which I imagine would have the desired effect of distributing the data evenly over the files.
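The BCP round trip would look roughly like this from a command prompt (a sketch; the database, table, server, and staging path are placeholders):

bcp MyDB.dbo.BigTable out D:\Staging\BigTable.dat -n -T -S MyServer
(then, after verifying the export, run TRUNCATE TABLE dbo.BigTable; in SSMS)
bcp MyDB.dbo.BigTable in D:\Staging\BigTable.dat -n -T -S MyServer -E -b 50000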
I'm using SQL accounting software now and I have a problem with my report designer. When I used the report designer to design my customer statement of account, after I saved the new design I hadn't renamed it for the new statement report, so the name was left empty, and I exited the report designer. Now when I re-open the report designer, a "field value required" message suddenly pops up. What should I do? How can I re-open the report designer again?
I am trying to set up querying Active Directory from SQL for the first time.
We are running on Windows Server 2012 and using SQL Server 11.0.2100.60. I have tried the following.
SQL is on server DEV; AD is on server DO.
EXEC sp_addlinkedserver 'ADSI', 'Active Directory Services 2.5', 'ADSDSOObject', 'adsdatasource'
GO
[Code] ....
I get the following error when I try to query:
Msg 7321, Level 16, State 2, Line 2 An error occurred while preparing the query "SELECT name FROM 'LDAP:// xxxx.internal' WHERE objectCategory='Person' AND objectClass = 'contact'" for execution against OLE DB provider "ADSDSOObject" for linked server "ADSI".
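Should the LDAP path be written with DC= components instead? The format I am aiming for is along these lines (a sketch; the domain components are placeholders):

SELECT name
FROM OPENQUERY(ADSI,
    'SELECT name
     FROM ''LDAP://DC=xxxx,DC=internal''
     WHERE objectCategory = ''Person'' AND objectClass = ''contact''');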
We carried out an in-place upgrade on our production server on Saturday - going from 2008 R2 to 2014.
We had tested this method out in dev/test and pre-production with only minor post issues to fix.
However, on production we had an issue whereby checkdb was hitting 100% CPU and caused overnight processes to hang. The checkdb statement was terminated and disabled by a colleague at 1 am.
Since then we have restored this database to a dev server and ran checkdb against it with no_infomsgs and all_errormsgs, and it has been running since Monday morning but still hasn't finished!
The database is just over 800 GB, and whilst checkdb was crippling the CPU, logical reads were less than one. However, sp_whoisactive is showing that it has done 56 million reads so far, and this number increases periodically, so it looks like it keeps going back to re-check the database with a deep dive.
Also, on a different environment, we ran check table statements and one of them took over 9 hours for a single table but came back clean (see attachment).
We need to wait for the output but the database is still in use in production and the mess will just get worse if it is indeed corrupted.
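For reference, the statement we are running on the dev restore is essentially this (the restored database name here is a placeholder):

DBCC CHECKDB (ProdDB_DevCopy) WITH NO_INFOMSGS, ALL_ERRORMSGS;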
I'm looking at installing 2008R2 and 2014 side by side, then using Mirroring to provide HA for the 2008R2 instance and AoHA for the 2014 instance. I'd be using the same two physical servers for both the Mirroring pair and the AoHA pair.
I have a SQL Server 2008 instance that is running on "LiveServer" our production database (ProdDB) - and we need to upgrade to 2014. In order to do some upgrade testing, I spun up a VM with the same version of SQL server on the test VM (TestServer), did a backup of the production DB from the live server, and restored it to TestServer under a different name (ProdDBUA).
I then installed SQL2014 Upgrade advisor onto TestServer, and ran it, checking all the boxes (reporting services etc..) and it all came back clean - no issues whatsoever - not a single warning even. I'm under the impression that stored procs/functions etc... all reside within the DB, so a backup will include those. Is that correct?
The problem is, I know I have stored Procs, functions and views that use deprecated joins in that LiveServer.ProdDB. What do I need to do/configure/check in order to make sure that the Upgrade Advisor is actually checking through all that T-SQL that has deprecated code? I want to have a list to give to my report writers of procs/functions/views that need to be rewritten prior to the upgrade going live.
If there is a modification that needs to be run on TestServer.ProdDBUA first (e.g. a cursor to change the path), that's fine. The DB is running in compatibility mode 90.
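In the meantime, would a plain text search of the module definitions be good enough to build that list for the report writers? Something along these lines (a sketch; it is a simple LIKE search for old-style join operators, so expect false positives):

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%*=%' OR m.definition LIKE '%=*%';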
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is objecting to the collation we use on our 'NEW' SQL server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations have CI and AS
From experience I know that databases with different collations can reside next to each other on the same instance.
The only problem could be ('could be'!!) the use of tempdb, with a high volume of transactions to be executed in tempdb and choosing Snapshot Isolation Level ....
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no tempdb issue, I guess).
Are there any real issues with databases using different collations running on the same instance?
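The kind of clash I can imagine is the classic tempdb one below (a sketch; the table and column names are made up). Is this the only practical risk?

CREATE TABLE #Names (Name varchar(50));    -- temp table column inherits tempdb's collation (Latin1_General_CI_AS)

SELECT c.CustomerName
FROM dbo.Customers AS c                    -- column collated SQL_Latin1_General_CP1_CI_AS in the user database
JOIN #Names AS n
  ON c.CustomerName = n.Name COLLATE DATABASE_DEFAULT;   -- explicit COLLATE avoids the "cannot resolve collation conflict" error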
My SQL databases in SQL Server 2014 show the status "suspect" in SQL Server Management Studio. I can't bring the databases back to a serviceable condition through standard procedures. I need to recover from the .mdf file.
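Is the emergency-mode route below worth trying (just a sketch; the database name is a placeholder, and I realise REPAIR_ALLOW_DATA_LOSS can lose data)?

ALTER DATABASE MyDB SET EMERGENCY;
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (MyDB, REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDB SET MULTI_USER;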
The log file of a production database has increased from an initial size of 4 GB to 150 GB. Now I want to find out when it grew, how much it grew each time, and which transaction is responsible for this.
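Would reading the default trace for auto-growth events at least give me the "when" and "how much"? Something like this (a sketch; it assumes the default trace is still enabled, which it is out of the box):

DECLARE @TracePath nvarchar(260);
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.DatabaseName, t.FileName,
       t.IntegerData * 8 / 1024 AS GrowthMB,   -- IntegerData is the growth in 8 KB pages
       t.LoginName, t.ApplicationName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
WHERE t.EventClass = 93                        -- 93 = Log File Auto Grow
ORDER BY t.StartTime DESC;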
I have a database; it is 50 GB with hundreds of columns. I would like to choose a certain column and export the data in it to a .csv or Excel file. How can I do that? I am very new to MSSQL...
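Would something like this work from a command prompt (a sketch; the database, table, column, server, and output path are placeholders)?

bcp "SELECT SomeColumn FROM MyDB.dbo.MyTable" queryout C:\Export\SomeColumn.csv -c -t, -T -S MyServer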
I had to reinstall my local copy of SQL Server a few weeks ago, which naturally overwrote the msdb.dbo.sysmanagement_shared_server_groups_internal and msdb.dbo.sysmanagement_shared_registered_servers_internal tables.
However, I still have the local XML file that SSMS reads, so I can still access the groups; I just get weird errors when trying to re-register my install as the new CMS. How can I rebuild those tables from the XML file, or is there a way to repopulate them?
We have one table where we store all documents in one of the column called "Doc" with varbinary(max) data type.
We want to download those documents from the SQL table to Windows Explorer, and I wrote a BCP script in SQL 2005. Things were fine.
The format file I used there looks like this:
9.0
1
1    SQLBINARY    0    0    ""    1    Doc    ""
Now we are on 2014, and when I try the same code with the same format file, it hangs in the middle. So I changed the version in the file to 12.0 instead of 9.0, but it is still not working.
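For context, the export command is along these lines (a sketch; the database, table, key column, output path, and server name are placeholders):

bcp "SELECT Doc FROM MyDB.dbo.Documents WHERE DocID = 1" queryout C:\Export\Doc1.pdf -T -S MyServer -f C:\Export\Doc.fmt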
Is there a way to restore all filegroups except one? Example: database A has 10 filegroups, but one of them is defunct, so I can't delete it and there's no backup to restore it from. Can I create a new DB by restoring the 9 good FGs from a backup of database A?
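Is a partial (piecemeal) restore the way to do this? Something like the sketch below, listing only the good filegroups (all names and paths here are placeholders):

RESTORE DATABASE A_Copy
    FILEGROUP = 'PRIMARY', FILEGROUP = 'FG1', FILEGROUP = 'FG2'   -- ...list all 9 good FGs
FROM DISK = 'D:\Backups\A_Full.bak'
WITH PARTIAL,
     MOVE 'A_Data' TO 'D:\SQLData\A_Copy.mdf',
     MOVE 'A_FG1'  TO 'D:\SQLData\A_Copy_FG1.ndf',
     MOVE 'A_Log'  TO 'D:\SQLData\A_Copy_log.ldf',
     RECOVERY;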
I have a Windows 8 PC that I just got, and I installed SQL Express 2014. My buddy has Windows 7 and installed SQL Express on his PC. We created a DB on his PC, did a backup, and copied the backup to my PC. In SSMS I right-click "Databases" > Restore Database, click Device and the button to find my file. I navigate to the folder where the file shows in File Explorer, but the .bak file does not show in SSMS to restore from. This is probably a Windows thing, but I don't know what to look at.
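Is there a way to skip the browse dialog entirely and restore by path? Something like this sketch (the paths and logical file names are placeholders and would need to match the backup):

RESTORE FILELISTONLY FROM DISK = 'C:\Backups\MyDB.bak';   -- check the logical names first

RESTORE DATABASE MyDB
FROM DISK = 'C:\Backups\MyDB.bak'
WITH MOVE 'MyDB'     TO 'C:\SQLData\MyDB.mdf',
     MOVE 'MyDB_log' TO 'C:\SQLData\MyDB_log.ldf';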
When I execute dbcc sqlperf(logspace); I get the following values.
Database Name    Log Size (MB)    Log Space Used (%)
master           16.17969         13.30275
tempdb           7.429688         61.7245
model            0.7421875        45.78947
msdb             5.554688         25.87904
distribution     2808.93          0.8172179
BANKDB           23438.87         48.20037
WSMIRSDB         109.7422         4.839111
For database BANKDB, Log Space Used (%) is 48.83% and the log size is about 23438.87 MB, whereas the database size of BANKDB is 60 GB. A FULL database and log backup is done once every night. My database is performing slowly now.
Do we need to take log backups more frequently, like once an hour, so that the log space used will be less? The same query is taking more time to execute than before in the same database; is that because the log file has grown?
I do an index reorganize and rebuild once a week and update stats nightly.
Is it correct that once log space used increases by more than 10%, we need to take a log backup?
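The statement I would schedule hourly is just this (a sketch; the backup path is a placeholder, and it only applies while BANKDB is in the FULL recovery model):

BACKUP LOG BANKDB
TO DISK = 'E:\Backups\BANKDB_log.trn'
WITH COMPRESSION;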
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDB – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, are there any benefits to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
I have installed SQL Server 2014 (Evaluation Edition) on a testing machine. We want to import some Excel files into a database. I manually created one test database and am now trying to import an Excel file. The import completes successfully, but I am not able to see any table created as a result of the import. I tried it 3-4 times and even restarted the SQL services, but no luck.
DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\server\BSQL_Backup\file.bak', @File_Exists OUT
PRINT @File_Exists
-- returns 1

And to check a folder you can append "nul", but it doesn't work for a UNC path:

DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\server\BSQL_Backup\nul', @File_Exists OUT
PRINT @File_Exists
-- returns 0
If I use xp_subdirs, like:
EXEC master.dbo.xp_subdirs '\\server\BSQL_Backups'
and the folder doesn't exist, I get:
Msg 22006, Level 16, State 1, Line 3
xp_subdirs could not access '\\Server\BSQL_Backups\*.*': FindFirstFile() returned error 67, 'The network name cannot be found.'
How can I check whether a UNC folder exists before running a backup? In my code I want to check that the UNC folder exists before doing the backup; the UNC path is retrieved from another table or from the backup history.
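Would capturing the xp_fileexist result set work for the folder case? A sketch (the UNC path is a placeholder; the second column of the result set, "File is a Directory", is what I would test):

DECLARE @UncPath nvarchar(260) = '\\server\BSQL_Backups';
DECLARE @Check TABLE (FileExists int, IsDirectory int, ParentDirExists int);

INSERT INTO @Check
EXEC master.dbo.xp_fileexist @UncPath;

IF EXISTS (SELECT 1 FROM @Check WHERE IsDirectory = 1)
    PRINT 'Folder exists - safe to back up';
ELSE
    PRINT 'Folder not found';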
I have a requirement to implement CDC for 50+ tables, to capture incremental data changes for warehouse/reporting rather than exporting the whole table data. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?
Is there any performance impact if we enable CDC on OLTP db?
Can we make use of the CDC tables on the environment we do daily db refresh so that the queries don't hit OLTP database?
What is the best way to implement CDC to take incremental changes for reporting?
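For scoping, enabling CDC would look something like this per table (a sketch; the database, schema, and table names are placeholders):

USE OLTPDB;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'Orders',
    @role_name            = NULL,     -- no gating role
    @supports_net_changes = 1;        -- requires a primary key or unique index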
Hello, I have created a package that runs without problem. I run the package with the command dtexec /F "package_name.dtsx" > package_name.txt.
When I run the same package from SQL Server Agent, everything is OK.
Then, I tried to edit the command line to have the output file, but I got an error.
The command line is: dtexec /F "package_name.dtsx" /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E > package_name.txt (the /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E options are added by default).
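Would running the step as an Operating system (CmdExec) job step instead of an SSIS step make the redirection work, since cmd.exe would handle the ">"? The step command would be roughly this (a sketch; the package and log paths are placeholders):

dtexec /F "C:\Packages\package_name.dtsx" /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E > "C:\Logs\package_name.txt"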
I have chosen the destination - an unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others from the list are not exported. How can I export ALL the data into one file? I need to do this so I can edit the syntax in an editor and then import the data and database structure into PostgreSQL.