SQL Server 2008 :: Paths Not Well Defined For Databases
Oct 29, 2015: What are the potential risks of keeping data, log, tempdb, and backup files on the same drive, albeit in different folders?
I am creating a materialized (indexed) view, but it is failing with an error that it can't be schema bound.
The query I am using to create the materialized view joins several tables and a function.
Is it possible to create indexed views over user-defined functions?
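For reference, a minimal sketch of an indexed view, using hypothetical table and column names. Any function the view references must itself be created WITH SCHEMABINDING and be deterministic, which is the usual cause of the "cannot be schema bound" error:

-- Indexed views require WITH SCHEMABINDING, two-part object names,
-- and a unique clustered index as the first index on the view.
CREATE VIEW dbo.vw_OrderTotals
WITH SCHEMABINDING
AS
SELECT o.CustomerID,
       COUNT_BIG(*) AS OrderCount,              -- COUNT_BIG(*) is mandatory with GROUP BY
       SUM(ISNULL(o.Total, 0)) AS TotalAmount   -- aggregated expressions must be non-nullable
FROM dbo.Orders AS o
GROUP BY o.CustomerID;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotals
    ON dbo.vw_OrderTotals (CustomerID);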
I have around 600 databases on my server, and a user needs SELECT access to all of them. Will I have to go into each database one by one, create the user, and grant him the db_datareader role, or is there a shorter way to do so?
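One possible shortcut, sketched below with a placeholder login name, is the undocumented sp_MSforeachdb procedure (the login [DOMAIN\SomeUser] and the skip-if-exists check are assumptions, not tested code):

USE master;
GO
-- Runs once per database; '?' is replaced with each database name.
EXEC sp_MSforeachdb N'
USE [?];
IF DB_ID(''?'') > 4  -- skip master, tempdb, model, msdb
   AND NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N''DOMAIN\SomeUser'')
BEGIN
    CREATE USER [DOMAIN\SomeUser] FOR LOGIN [DOMAIN\SomeUser];
    EXEC sp_addrolemember N''db_datareader'', N''DOMAIN\SomeUser'';
END';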
I have some questions:
1. Is it OK to shrink master and model? The transaction logs for those databases are full.
2. Is it OK to set the recovery model of the model database to SIMPLE? At the moment it is in the FULL recovery model and its transaction log is full too.
Is there a script to find a value in all databases? For example, I want to find values containing the word "SQL" in all databases, so that I get back which database and which table it exists in.
My script only searches for a value in one database.
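A possible wrapper, assuming the single-database search logic is packaged as a stored procedure that has been deployed to each database (dbo.usp_SearchDb is a placeholder name): sp_MSforeachdb can run it once per database.

-- Runs the single-database search in every user database; '?' becomes each name.
EXEC sp_MSforeachdb N'
USE [?];
IF DB_ID(''?'') > 4  -- skip the system databases
    EXEC dbo.usp_SearchDb @SearchValue = N''SQL'';';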
I have log shipping enabled on my databases (primary and secondary) and it works fine. I now need to implement TDE on the database. I have experience implementing TDE on databases that are not used for log shipping.
What are the steps needed to set up TDE when log shipping is involved?
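A sketch of the usual sequence, with placeholder names and paths; the log-shipping-specific step is restoring the primary's certificate on the secondary before any encrypted log backups are applied:

-- On the primary:
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
BACKUP CERTIFICATE TDECert TO FILE = 'C:\Keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Keys\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = 'StrongPassword!1');
USE MyShippedDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE MyShippedDb SET ENCRYPTION ON;

-- On the secondary: create the same certificate from the backup files,
-- then log shipping can continue restoring as before.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';
CREATE CERTIFICATE TDECert
    FROM FILE = 'C:\Keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Keys\TDECert.pvk',
                      DECRYPTION BY PASSWORD = 'StrongPassword!1');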
I would like to migrate around 15 databases from the production server to the new production server. The biggest database is 80 GB.
Is there a fast way to do this with very little downtime?
What I can think of is shrinking the database files and performing a detach/attach.
How do I find unused databases in an instance, or when each was last used or accessed?
I'm on SQL Server 2008 R2 64-bit Enterprise.
I need to find when each database was last accessed.
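One hedged approach: sys.dm_db_index_usage_stats records the last seek/scan/lookup/update per database, but only since the instance last restarted, so an empty result may just mean a recent reboot.

SELECT d.name,
       MAX(u.last_user_seek)   AS last_user_seek,
       MAX(u.last_user_scan)   AS last_user_scan,
       MAX(u.last_user_lookup) AS last_user_lookup,
       MAX(u.last_user_update) AS last_user_update
FROM sys.databases AS d
LEFT JOIN sys.dm_db_index_usage_stats AS u
       ON u.database_id = d.database_id
WHERE d.database_id > 4  -- skip the system databases
GROUP BY d.name
ORDER BY d.name;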
I have a new problem with doing a restore of a number of databases using PowerShell. The script I'm using is based mainly on this one (Part 2 in particular): [URL] .....
The problem I'm having is around the RedgateGetDatabaseName function. My hunch is that it's down to the different version of Red Gate and how SQL Backup works. Basically, when the call is made to the function, it returns both the database name and the number of rows that the SQL command in the function produced. I've tried including SET NOCOUNT ON at the start of the SQL command in the function, but it still returns the row count.
Currently our database size is around 350 GB. It will grow up to 1.5 TB.
At the database level we have:
Auto Create Statistics: True
Auto Update Statistics: True
Auto Update Statistics Asynchronously: False
We have a weekly update-statistics job that runs for a very long time. It was created through a maintenance plan using the full-scan option.
They previously tested sampling, but running with sampling instead of a full scan affected the queries.
Is there an option to avoid the long job duration?
If we didn't run statistics manually, what would happen? How do you maintain statistics on large databases?
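Two commonly suggested middle grounds, sketched with a placeholder table name, are a fixed sample large enough not to hurt plans, or sp_updatestats, which only touches statistics whose underlying rows have changed:

-- Sample one large table at a fixed rate instead of FULLSCAN:
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 25 PERCENT;

-- Or update only the statistics that have row modifications:
EXEC sp_updatestats;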
I need to generate data models of all databases for an instance. How can I accomplish this?
I have 2 databases: one for our intranet and another for our internet website. In the intranet database we have a table called "Clients". A new client can be added in one of 2 ways:
1) One of our employees manually adds the client via the intranet
2) A new customer subscribes to our services via our website, thus becoming a new "client" (when subscribing online, we also add a record to an "Accounts" table located in our internet website database, but we'd also like to add a record to our intranet's Clients table as well).
In the Clients table is a field called "CreatedBy" which expects the UserId of whoever created the account. Again, this UserId can belong to either an employee via the intranet or a new customer via our internet website. How do I distinguish where the UserId in the dbo.Clients.CreatedBy field is coming from?
My 1st thought was to append a negative sign to the CreatedBy UserId if it came from a customer via the website, but that just seemed too quirky (i.e., if an employee added a client and his UserId was '10', Clients.CreatedBy would be '10'; if a customer subscribed via the website, Clients.CreatedBy would be '-10'). So, I need a way to keep these databases separate but still keep track of which database a user is coming from when we create a new Client record.
I am working on a task. Currently we take a database backup and keep the backups in a folder. The backups don't have a timestamp on them. My task is to get the latest backup, copy it to some other server, and then restore the database from there. I am planning to create an SSIS package. Do we need a Script Task for this? How do I get the .bak with the latest created or modified date? For now we don't have a timestamp, so I need to go by modified date.
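If staying in T-SQL is acceptable, one sketch (assuming xp_cmdshell is enabled and D:\Backups is a placeholder path) is to let dir sort the files by modified date:

DECLARE @files TABLE (id INT IDENTITY(1,1), fname NVARCHAR(260));
INSERT INTO @files (fname)
EXEC xp_cmdshell 'dir /b /od "D:\Backups\*.bak"';  -- /od sorts oldest-to-newest by modified date
SELECT TOP (1) fname AS newest_backup
FROM @files
WHERE fname IS NOT NULL
ORDER BY id DESC;  -- the last row listed is the most recently modified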
We have two databases with the same schema and tables (same table names; basically a main DB and a copy of the main DB). The following is an example of table names from the 2 DBs.
CREATE TABLE #SourceDatabase (SourceColumn1 VARCHAR(50))
INSERT INTO #SourceDatabase VALUES ('TABLE1'), ('TABLE2'), ('TABLE3'), ('TABLE4'), ('TABLE5'), ('TABLE6')
SELECT * FROM #SourceDatabase
DROP TABLE #SourceDatabase
CREATE TABLE #ArchiveDatabase (SourceColumn2 VARCHAR(50))
INSERT INTO #ArchiveDatabase VALUES ('TABLE1'), ('TABLE2'), ('TABLE3'), ('TABLE4'), ('TABLE5'), ('TABLE6')
SELECT * FROM #ArchiveDatabase
DROP TABLE #ArchiveDatabase
We need a T-SQL statement that can create one view for each table from both databases (assuming both databases have the same number of tables and the same table names), so that we can run the T-SQL in a third database and that third DB gets all the views (one view per table, combining the 2 DBs). The name of each view should be the same as the table name, and all 3 DBs are on the same server.
The 2 temp tables above are just examples; the DBs have around 1,700 tables each, so we need something like the following for each table:
CREATE VIEW DBO.TABLE1 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE1] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE1]
CREATE VIEW DBO.TABLE2 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE2] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE2]
CREATE VIEW DBO.TABLE3 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE3] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE3]
CREATE VIEW DBO.TABLE4 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE4] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE4]
CREATE VIEW DBO.TABLE5 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE5] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE5]
CREATE VIEW DBO.TABLE6 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE6] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE6]
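A sketch of a generator for those statements: it builds the CREATE VIEW text from the catalog of one database (both are assumed identical, per the question), and the output can then be reviewed and executed in the third database.

SELECT 'CREATE VIEW dbo.' + QUOTENAME(t.name)
     + ' AS SELECT * FROM [SourceDatabase].[dbo].' + QUOTENAME(t.name)
     + ' UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].' + QUOTENAME(t.name) + ';'
FROM [SourceDatabase].sys.tables AS t
ORDER BY t.name;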
I'm trying to measure the amount of CPU used in a backup and restore, using the SQL Server Databases: Backup/Restore counter in Perfmon.
I can select the counter, but it does not let me select an instance database to monitor.
See image: [screenshot not included]
I have 3 instances on my machine, 2 SQL 2005 and 1 SQL 2008. Why do these instances not appear? I don't know how to set up this monitoring.
If I type LOCALHOST or 127.0.0.1 in the "Select counters from computer" text box, I can see all the databases under my SQL 2005 instances, but not the ones under the 2008 instance. I have disabled all SQL Server 2005 services and can confirm that none of the SQL 2008 instance databases show up.
This is a huge problem, especially as I start stress testing SQL Server 2008 in preparation for upgrading.
Does anyone have any ideas why I cannot see SQL Server 2008 instances in Perfmon? I have tried from a remote PC as well, with the same results.
Is there a query or stored procedure I can use to pull all databases on the server along with their logical file names and physical file names?
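One query that covers this, since sys.master_files lists every file for every database on the instance:

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       type_desc            AS file_type,
       physical_name
FROM sys.master_files
ORDER BY database_name, file_type;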
I was contacted by the SAN team to test backup/restore of larger databases using a split-mirror backup (BCV), or clone, that is taken from the production DB server and copied to another SQL box. They want to use this process once a week. I see the mounted drives with the data/log files, and all looks good. Initially I attempted to attach the databases and received "Unable to open the physical file db.mdf. Operating System Error 5: Access is denied". I manually granted SQLServerMSSQLUser$<computer_name>$<instance_name> permissions on all of the physical files, 20 in total. That worked.
Since this will be weekly, the SAN team performed the copy again, and now none of the databases can communicate with the newly copied files; the NTFS permissions need to be set again. I'm getting "Operating System error 21: the device is not ready". Is there something I'm missing in this process, in how the vendor's BCV clones the data and how SQL communicates with the copied files? I was thinking it would be a more automated process.
Is there an easy way to compare the contents of objects between 2 different databases? For example, say I had 2 databases, My_DB_1 and My_DB_2, each with a SEC_User table, and I wanted to do an object-to-object comparison between the databases to see if there were any differences. Here's some sample SQL:
use My_DB_1
select * from sys.dm_sql_referencing_entities('dbo.sec_user', 'object')
use My_DB_2
select * from sys.dm_sql_referencing_entities('dbo.sec_user', 'object')
Say that the result sets above both returned a SEC_GetUser sproc reference. Is there a way to write SQL that will compare the SEC_GetUser sprocs (and the other objects in the rowsets above) from both databases? For example, if My_DB_1.SEC_GetUser returns an extra column in its result set compared to My_DB_2.SEC_GetUser, then I'd like my comparison SQL to return a single column "IsDifferent" with SEC_GetUser as a row.
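A sketch for a first pass: compare a checksum of each module's definition across the two databases. This only flags textual differences in the object definitions (such as an extra column in the SELECT), not behavioural ones.

SELECT COALESCE(a.name, b.name) AS object_name,
       CASE WHEN a.defsum IS NULL OR b.defsum IS NULL OR a.defsum <> b.defsum
            THEN 1 ELSE 0 END   AS IsDifferent
FROM (SELECT o.name, CHECKSUM(m.definition) AS defsum
      FROM My_DB_1.sys.sql_modules AS m
      JOIN My_DB_1.sys.objects     AS o ON o.object_id = m.object_id) AS a
FULL OUTER JOIN
     (SELECT o.name, CHECKSUM(m.definition) AS defsum
      FROM My_DB_2.sys.sql_modules AS m
      JOIN My_DB_2.sys.objects     AS o ON o.object_id = m.object_id) AS b
  ON a.name = b.name
ORDER BY object_name;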
I am working towards automating the process of testing our backups. For the meantime I do it all manually: I copy the backup files (full + transaction logs) to our test server and then run the restore script. Once the database is restored, I run DBCC CHECKDB. I manually upload the results of CHECKDB to our SharePoint portal as proof that the backup file is intact, with no errors.
Here are some ideas I have but have not yet tested:
Create a maintenance plan with 3 jobs:
--> a PowerShell script to copy the files from the prod server to the test server - add this script to Job1
--> a PowerShell script to restore the database files - add this script to Job2
--> run DBCC in PowerShell (yet to find out if this is possible in PS) - add this script to Job3 (see the sketch after this list)
I would like to use separate jobs so as to get a report on the duration and status of each job.
I would also like to get the results of DBCC CHECKDB, as proof that no errors were found, for upload to our SharePoint portal. I don't know if that is possible via the job.
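On the DBCC point, a T-SQL sketch of what that job step could run (TABLERESULTS is an undocumented but long-standing option; RestoredDb is a placeholder name). The rows it returns can be captured by the Agent job step's output file, or by Invoke-Sqlcmd from PowerShell, and saved as the SharePoint evidence:

-- Returns CHECKDB findings as a result set instead of messages:
DBCC CHECKDB (N'RestoredDb') WITH NO_INFOMSGS, TABLERESULTS;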
How do I restrict resource usage per database with Resource Governor?
We have many databases in one instance; I would like to restrict resource usage for each database respectively.
I created 2 pools (pool_login, pool_DBNAME) and 2 workload groups (GroupLogin, GroupDBNAME), plus the classifier function. After the setup above, I use the following statement to check which sessions are in each group.
Even when there are SPIDs accessing database DBNAME, I can't see them fall into the group GroupDBNAME and the pool pool_DBNAME.
SELECT s.group_id, CAST(g.name as nvarchar(20)), s.session_id, s.login_time, CAST(s.host_name as nvarchar(20)), CAST(s.program_name AS nvarchar(20))
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_resource_governor_workload_groups g
ON g.group_id = s.group_id
ORDER BY g.name
GO
The following is the code that creates the pools, groups, and classifier function:
USE master
GO
-- Create a resource pool pool_login.
CREATE RESOURCE POOL pool_login
WITH
[Code] ....
-- Create a workload group to use this pool.
CREATE WORKLOAD GROUP GroupLogin
USING pool_login;
GO
CREATE WORKLOAD GROUP GroupDBNAME
USING pool_DBNAME;
[code]....
-- Register the classifier function with Resource Governor.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.rgclassifier_v1);
GO
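For reference, a minimal sketch of what such a classifier can look like (the names match the groups above; the routing rule itself is an assumption). Note that classification happens once, at login: ORIGINAL_DB_NAME() returns the database named in the connection string, so a session that connects to another database and later issues USE DBNAME stays in its original group, which is one common reason sessions never show up in GroupDBNAME.

USE master;
GO
CREATE FUNCTION dbo.rgclassifier_v1()
RETURNS sysname
WITH SCHEMABINDING  -- required for a Resource Governor classifier
AS
BEGIN
    DECLARE @grp sysname;
    SET @grp = N'default';
    IF ORIGINAL_DB_NAME() = N'DBNAME'
        SET @grp = N'GroupDBNAME';
    RETURN @grp;
END;
GO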
I've just installed VS2005 Pro and SQL Server Developer Edition.
Now I want to set the default paths for projects, templates, and settings, for both VS and SQL files.
For example, right now the default folders for SQL Server Management Studio are:
.\My Documents\SQL Server Management Studio\Projects
.\My Documents\SQL Server Management Studio\Settings
.\My Documents\SQL Server Management Studio\Templates
.\My Documents\SQL Server Management Studio\Backup Files
... and so on.
For Visual Studio:
.\My Documents\Visual Studio 2005\Projects
.\My Documents\Visual Studio 2005\Templates
.\My Documents\Visual Studio 2005\Code Snippets
... and so on.
I would like all the sub-folders (from SQL Server and VS) to be in one shared folder (Development), for example:
.\My Documents\Development\Projects
.\My Documents\Development\Settings
.\My Documents\Development\Templates
.\My Documents\Development\Backup Files
.\My Documents\Development\Code Snippets
... and so on.
I have tried changing the default path in each application's options, pointing to each of the folders listed above.
My problem is that both VS and SQL Server still keep creating folders in the old locations, for example ".\My Documents\SQL Server Management Studio\Projects". This happens when I create a new query and try to save it: it automatically recreates that folder.
It only works for a few of them, like the Settings folder; I've managed to get the SQL and VS setting files into a single folder.
Is there a way I can join both applications' folders? I want to keep both projects' files in one folder, both settings files in one folder, and so on.
I hope I explained my situation well.
Thanks!
Currently I have a SQL string which looks like this:
INSERT INTO tblPDFFiles (fileType, PDFcontent) SELECT 'First test file', BulkColumn FROM OPENROWSET(BULK 'C:\Test.pdf', SINGLE_BLOB) AS BLOB
But the file I am trying to access is on a different shared server called 'test2008'. Now I'm told I can access it by doing:
INSERT INTO tblPDFFiles (fileType, PDFcontent) SELECT 'First test file', BulkColumn FROM OPENROWSET(BULK '\\test2008\testpdf.pdf', SINGLE_BLOB) AS BLOB
I made sure the PDF is there, but I get the following error:
SSIS package "Package.dtsx" starting.
Error: 0xC002F210 at Execute SQL Task, Execute SQL Task: Executing the query "INSERT INTO tblPDFFiles (fileType,PDFcontent) SELECT 'First test file', BulkColumn FROM OPENROWSET(Bulk '\\test2008\test.pdf', SINGLE_BLOB) AS BLOB" failed with the following error: "Cannot bulk load because the file "\\test2008\test.pdf" could not be opened. Operating system error code 5(Access is denied.).". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Task failed: Execute SQL Task
Warning: 0x80019002 at Foreach Loop Container: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "Package.dtsx" finished: Failure.
How do I give my SSIS package access to this other server?
I'm curious if there's a "best practice" for setting up the data directories MS SQL will use for each operation. I've allocated independent disks for things like C: (OS), E: (DATA), etc., but I'm not familiar enough with MS SQL to understand how DBAs commonly configure the folders under each dedicated disk for things like DATA, LOGS, BACKUP, INDEXES, and TEMPDB. Should I have an identically named folder, as shown below in my example?
You can see I've just mirrored the drive name to a new folder under the partition, so data is being written to F:\DATA and logs to E:\LOGS. Is this considered correct/good practice? I assume naming the folder in each mount point after whatever I logically called the drive is correct, but tell me if I should change how I configure my drive paths. I'm trying to learn common good SQL Server practices, and while I work on properly installing SQL Server 2012/2014, I want to make sure I configure the paths SQL will utilize correctly.
I have a problem where I need to select all top-level file paths from a string value in SQL.
So I have a column "Locations".
Example data:
X:\folder\another folder
X:\folder\yet another folder
X:\foldername\another folder
X:\foldername\yet another folder
I'd want to return only:
X:\folder
X:\foldername
I need to somehow parse the string and capture anything before the second '\'.
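A sketch of that parse using nested CHARINDEX calls (dbo.MyTable is a placeholder for whatever table holds the Locations column):

SELECT DISTINCT
       LEFT(Locations,
            CHARINDEX('\', Locations, CHARINDEX('\', Locations) + 1) - 1) AS TopLevelPath
FROM dbo.MyTable
WHERE CHARINDEX('\', Locations, CHARINDEX('\', Locations) + 1) > 0;  -- only rows with a second backslash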
Hello. I have a little problem.
I have SQL Server Express 2005 installed on my machine, as well as SQL Server Management Studio. For some practice, I also installed the AdventureWorks sample database and attached it to the server. When I open SQL Server Management Studio, it's right there in the Object Explorer, sitting nicely along with the master database and so on.
Now, I have recently installed Visual C# Express 2008 in order to explore this exciting new thing called LINQ (Language Integrated Query). To do so, I created a new project (say, a WPF or Console application), and once the project is up and running, it would be nice to attach some data to it so we can query that data and learn about LINQ.
There is a problem, however. On the main menu, I click Data > Add New Data Source, choose Database, click New Connection, and the Add Connection window comes up. As a data source, I specify Microsoft SQL Server Database File.
For the database file name, I navigate to MSSQL.1\Data, and in that folder, there it is: the AdventureWorks file. Now, here is the rub. When I try to attach this file, I get a window telling me that I don't have permission to open this file, and to contact the file owner or an administrator! How about that?!
Can anyone help? It would be very much appreciated!
Thanks.
Hi
I have SQL 2000/2005 installation path errors on some of our production servers. We have standards that backup files should go to the E: drive, data files to F:, and log files to H:. Can anyone advise what can be done about this issue without reinstallation?
Thanks in advance
We have a growing issue where we have a relative dtsconfig file (which stores the absolute base path of the ETL packages). This way we can keep the ETL projects fairly portable, only having to modify one value in the dtsconfig file. The master package that defines the dtsconfig location (which is config/Default.dtsconfig) usually interprets this location as relative to the project. The problem is that every now and again, when you open this package in .NET Studio, the path is interpreted differently and config/Default.dtsconfig is reported as an invalid path. But when we delete the variable (which defines the dtsconfig path), save/close, and open/recreate it, it works again. This may or may not be a supported MS method, but I was curious to know why this gets messed up. Is there somewhere in the .NET framework that defines what "/" is relative to?
For example: our absolute config path is "D:\Program Files\Microsoft SQL Server\90\DTS\Packages\ETLProject\ETLBase\config\Default.dtsconfig", and using "config/Default.dtsconfig" for the XML file value works. However, sometimes we get an error stating that this file cannot be found, and when we just try to delete it (without saving and closing) and immediately try to enter "config/Default.dtsconfig" again and hit Next, we get an error and the path is now:
'D:\Program Files\Microsoft SQL Server\90\DTS\Packages\DEVDataExchange\ETLBase\config\config\Default.dtsconfig'.
Ideas?
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Collections;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName = "Obj_Row",
        IsDeterministic = true, IsPrecise = true,
        TableDefinition = "ObjID int, ObjDataID int, ObjDataValue nvarchar(400)",
        DataAccess = DataAccessKind.Read)]
    public static IEnumerable Obj_IDs(SqlInt32 Data_1, SqlInt32 Data_2, SqlInt32 Data_3)
    {
        // Fix for "not all code paths return a value": the original only returned
        // inside the two if-blocks, so the paths where neither parameter is set,
        // or where an exception is caught, ended without a return statement.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            try
            {
                if (!Data_2.IsNull)
                {
                    return LoadRows(conn, "SELECT Obj_ID, Obj_Data_ID, Obj_Data_Value FROM tbl_Obj_2");
                }
                if (!Data_3.IsNull)
                {
                    return LoadRows(conn, "SELECT Obj_ID, Obj_Data_ID, Obj_Data_Value FROM tbl_Obj_3");
                }
                return new ArrayList();   // neither parameter supplied: empty result set
            }
            catch (Exception)
            {
                return new ArrayList();   // this path must also return a value
            }
            finally
            {
                if (conn.State == ConnectionState.Open)
                    conn.Close();
            }
        }
    }

    private static IEnumerable LoadRows(SqlConnection conn, string sql)
    {
        SqlCommand cmd = new SqlCommand(sql, conn);
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable();
        da.Fill(dt);                      // Fill opens and closes the connection itself
        return dt.Rows;
    }

    public static void Obj_Row(Object item, out int ObjID, out int ObjDataID, out string ObjDataValue)
    {
        DataRow row = (DataRow)item;
        ObjID = Convert.ToInt32(row["Obj_ID"]);
        ObjDataID = Convert.ToInt32(row["Obj_Data_ID"]);
        ObjDataValue = row["Obj_Data_Value"].ToString();
    }
}
//Error 1 'UserDefinedFunctions.Obj_IDs(System.Data.SqlTypes.SqlInt32, System.Data.SqlTypes.SqlInt32, System.Data.SqlTypes.SqlInt32)': not all code paths return a value
I'm a newbie. Please show me how to correct the problem. Thank you.
This has probably been covered in other posts. I have been working with SSIS for the past month, and I am trying to follow best practices on various items. Having worked with a different ETL tool prior to this, I am wondering what the best approach is for connections and file paths.
What I would normally do in DataStage would be to assign a job variable (and eventually a sequence variable) of type Path. So, if I was developing a job, I would create SourceFilePath, ErrorFilePath, etc., and use these variables in a FlatFile or Dataset stage. For instance, I would assign a source filename as #SourceFilePath#SourceFile1.txt; during execution the job would load the variable, and the filename would resolve to C:\MyDocuments\Datafiles\SourceFile1.txt.
When it's time to move to another environment, I don't have to worry about changing values for file connections, because that is managed dynamically by a config file or whatever method.
What is the best practice that emulates this behaviour in SSIS? I've been thick and can't get my head around this. Any direction to blogs or user sites would be great. Examples, even better!
Thanks in advance.
Hi:
I am trying to set up a TEST ENVIRONMENT for a reservation software package. I need this set up so that I can run various scheduling scenarios in order to optimize operations. Below are the instructions I have been given by the software vendor on how to set up my SQL database. However, I am a little confused about what to do for a couple of the steps. They are as follows:
***In SQL Server 2000 Enterprise Manager on the test laptop, change the 2 path settings in the table SGCONFIG. These should be changed to C:\Stratagen\Adept5\Server. **YOU WILL NEED TO DO THIS STEP EVERY TIME YOU RESTORE A BACKUP ADEPT5_CLASTRAN.BAK FROM THE PRODUCTION TO THE TEST ENVIRONMENT
QUESTION: How do you change the path settings in a table? (See the sketch below.)
***Open the .udl files in the apps folder and the server apps folder and check that they are pointing to the test laptop server, not the production server.
QUESTION: What are the .udl files, and how do you check that they are pointing at the test laptop server only?
Thanks sincerely for your help. I am trying to meet a deadline for a meeting tomorrow; therefore, I am desperate. Please send me email: rtanner@clastran.com.
Ron
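On the first question: path settings stored in a table are just column values, so they can be edited in Enterprise Manager's Open Table grid or changed with an UPDATE statement. A sketch, with the caveat that the SGCONFIG column name used here (PathValue) is a guess, since the table layout isn't shown:

UPDATE SGCONFIG
SET PathValue = 'C:\Stratagen\Adept5\Server'
WHERE PathValue LIKE '%Adept5%';  -- adjust the filter to match the 2 path rows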
What are the SQL 2005 upgrade paths? For example, is there a direct in-place upgrade from SQL 6.5? From SQL 7.0? Can someone provide a link please? Thanks in advance.
I have created over a hundred reports, and each of them is scheduled to output a PDF. The problem is this was temporary; each week I would like to output the reports to a directory that is based upon the date. For example, I have 13 groups of reports and each group has 9 reports. Each group has its own directory and folder; e.g., group A would be saved in something like \\client files\group a\weekly reports\Mar 17 2008. The next week, the reports for the same group need to be saved in \\client files\group a\weekly reports\Mar 24 2008. This would be the same for each of the groups: group A, group B, etc.
The question is: is there a way to set these paths dynamically, or at least iterate through the subscriptions and change the paths? (A starting point for locating them is sketched below.)
Cheers
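As a starting point for iterating over the subscriptions: they live in the ReportServer catalog, and the file-share delivery folder is buried in the ExtensionSettings XML. A read-only sketch (the table and column names below are my understanding of the RS 2005/2008 catalog; treat direct catalog changes as unsupported and back up the database first):

USE ReportServer;
SELECT s.SubscriptionID,
       c.Path AS ReportPath,
       s.ExtensionSettings   -- XML containing the delivery path setting
FROM dbo.Subscriptions AS s
JOIN dbo.[Catalog]      AS c ON c.ItemID = s.Report_OID
WHERE s.ExtensionSettings LIKE '%weekly reports%';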