I have a .ADP application (Access 2000 front end, SQL Server 2000 back end).
I basically want to create a new .ADP file and point it to a different SQL Server 2000 database.
I've copied the current SQL Server DB and recreated it, and done the same for the .ADP file, but the copy is still pointing to the old DB. Where do I go to have it point to the new SQL Server DB?
The JDBC datasource on the UAT server has been modified to use the MS SQL driver and to point to the UAT SQL Server. I don't really understand what "modified" and "pointing" mean exactly (I know in theory that any application which needs access has to point to a server; in other words, we say the application is pointing to the database). What do we actually do to point an application from one server to another? Could anyone give a detailed explanation? I think I am missing some basics about drivers and how applications point to databases. Can anyone recommend a book or link where I can study this kind of situation?
I've started working at an organization that has a SQL 2005 cluster with a named instance on it; let's call it Instance1. What they have done is create a DNS alias for the server name that is the same as the instance name, so when you connect to the SQL Server you connect to "Instance1\Instance1".
We want to move to a SQL 2014 cluster with AlwaysOn Availability Groups handling HA/DR. The question is: if I create a listener called "Listener1", is there a way, using DNS or anything else, to point "Instance1\Instance1" to "Listener1"?
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point DNS to the listener IPs? If a failover happens, can DNS point to the exact IP address of the listener on whichever node is now primary?
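For reference, this is roughly how I expect the listener to be created, with one static IP per subnet (the availability group name, IP addresses, and subnet masks below are made-up examples):

-- Sketch only: a listener with one static IP in each subnet
ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'Listener1' (
    WITH IP (
        (N'10.10.1.50', N'255.255.255.0'),   -- subnet 1
        (N'10.20.1.50', N'255.255.255.0')    -- subnet 2
    ),
    PORT = 1433
);

My understanding is that the listener's DNS record then holds both IPs, and clients that support it connect with MultiSubnetFailover=True so they try both addresses.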
How to "CREATE credentials pointing to SQL SERVER logins" and "creating proxys for SQL server agent pointing to SQL SERVER login.
As The SQL server login does not display in the CHECK NAMES list when we are trying to select the identity but All the other Windows logins display correctly in this list .
So using the SQLserver login fails to run a SQL server Job (which uses the credentials and proxies pionting to SQL server logins)
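For context, this is the kind of setup we are trying to build. From what I can tell, the credential identity has to be a Windows account rather than a SQL Server login, which may be why the SQL login never shows up in Check Names (the account, credential, proxy, and login names below are made up):

-- Credential based on a Windows account (made-up name)
CREATE CREDENTIAL AgentProxyCred
    WITH IDENTITY = N'DOMAIN\AgentProxyUser',
         SECRET = N'StrongPasswordHere';

-- Proxy that uses the credential
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'CmdExecProxy',
    @credential_name = N'AgentProxyCred';

-- Allow the proxy to run a subsystem (CmdExec as an example)
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'CmdExecProxy',
    @subsystem_name = N'CmdExec';

-- Grant the SQL Server login the right to use the proxy in its job steps
EXEC msdb.dbo.sp_grant_login_to_proxy
    @proxy_name = N'CmdExecProxy',
    @login_name = N'MySqlLogin';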
I am new to SQL, and I am trying to have a column in my table that points to images I have stored in a directory on my system. I cannot seem to get it to work correctly, and I am wondering if someone might be able to help. I want to use the image for each record alongside the other information on my Windows Form, but I cannot seem to get the file pointer right. Any help would be appreciated. Thanks.
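In case it helps, this is roughly the kind of table I am picturing, with the column holding the path to the image file rather than the image itself (the table and column names are made up):

-- Made-up example: store the path to the image, not the image data itself
CREATE TABLE dbo.Product (
    ProductId int IDENTITY(1,1) PRIMARY KEY,
    Name      nvarchar(100) NOT NULL,
    ImagePath nvarchar(260) NOT NULL   -- e.g. N'C:\Images\widget.jpg'
);

INSERT INTO dbo.Product (Name, ImagePath)
VALUES (N'Widget', N'C:\Images\widget.jpg');

The form would then read ImagePath for the selected record and load the file from disk.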
Now, when I insert a row into "TV", the foreign key "IdCamera" should relate to a row in either "InfraredCamera" or "DaylightCamera", depending on the "DeviceType" value.
In other words, if I insert a row with DeviceType=0, then IdCamera has to point to a row in the "InfraredCamera" table, and if I insert a row with DeviceType=1, then IdCamera has to point to a row in the "DaylightCamera" table.
So my question is: how can I set up the constraints so that IdCamera relates to a row in DaylightCamera or in InfraredCamera depending on the value of DeviceType? Should I make two foreign keys that allow NULL, or should I place both relationships on the same foreign key column? I'm not sure what to do.
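Here is a rough sketch of the two-nullable-foreign-key idea, with a check constraint tying the populated column to DeviceType (the table and column names are simplified and the referenced key columns are assumptions); I am not sure if this is the right way to go:

-- Sketch only: two nullable FK columns plus a CHECK that matches DeviceType
CREATE TABLE dbo.TV (
    IdTV             int IDENTITY(1,1) PRIMARY KEY,
    DeviceType       tinyint NOT NULL,   -- 0 = infrared, 1 = daylight
    IdInfraredCamera int NULL
        REFERENCES dbo.InfraredCamera (IdCamera),
    IdDaylightCamera int NULL
        REFERENCES dbo.DaylightCamera (IdCamera),
    CONSTRAINT CK_TV_CameraMatchesType CHECK (
        (DeviceType = 0 AND IdInfraredCamera IS NOT NULL AND IdDaylightCamera IS NULL) OR
        (DeviceType = 1 AND IdDaylightCamera IS NOT NULL AND IdInfraredCamera IS NULL)
    )
);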
Thanks, guys, for your help. It is really appreciated!
How do I "point" a set of stored procedures to operate on different linked servers?
In other words, I have the following linked servers:
DatabaseA, DatabaseB, DatabaseC, DatabaseD ... and in the future, additional linked servers may be added.
All the linked servers have identical schemas, but they contain unique data; each linked server represents a company.
I have a database which will contain stored procedures which I will want to operate against these linked servers. How can I "redirect" my stored procedures to operate against a chosen linked server?
If these were not linked servers, but SQL Server databases, I'd be able to replicate the same stored procedures in each database. Then, when I called a stored procedure, it would act against the data in that database. But these aren't SQL Server databases, so that idea is out.
Unfortunately, the USE command cannot be used within a stored procedure, and even if it could, I can't get it to accept a database name given as a variable. That idea is out.
The only alternative I have left is to use the string concatenation facility of the EXECUTE command. Unfortunately, with hundreds of complex queries, setting that up is going to be a nightmare.
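For reference, this is the kind of string building I am trying to avoid repeating for hundreds of procedures (the database, schema, and table names below are made up); a re-pointed synonym is sketched as a possible alternative:

-- Dynamic four-part name built as a string (made-up object names)
DECLARE @LinkedServer sysname;
SET @LinkedServer = N'DatabaseB';   -- chosen at run time

DECLARE @Sql nvarchar(max);
SET @Sql = N'SELECT OrderId, OrderDate FROM '
         + QUOTENAME(@LinkedServer) + N'.CompanyDb.dbo.Orders;';
EXEC (@Sql);

-- Alternative sketch: procedures reference a synonym, and the synonym is
-- dropped and recreated whenever a different linked server should be targeted
-- IF OBJECT_ID(N'dbo.Orders', N'SN') IS NOT NULL DROP SYNONYM dbo.Orders;
-- CREATE SYNONYM dbo.Orders FOR DatabaseB.CompanyDb.dbo.Orders;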
I am using a WMI Event Watcher task to watch for files dropped into a directory over a network drive.
It seems to work fine when it is pointed to a physical drive name, but not when I use a machine name.
I.e.
This works - SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA "CIM_DirectoryContainsFile" and TargetInstance.GroupComponent= "Win32_Directory.Name="C:\\temp\\folder1\\folder2\\folder3""
This fails - SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA "CIM_DirectoryContainsFile" and TargetInstance.GroupComponent= "Win32_Directory.Name=\\\\machineName\\folder1\\folder2\\folder3"
The failing code does not throw any errors (i.e. SSIS thinks the WQL syntax is correct), but it doesn't pick up any dropped files.
Am I doing something incorrect with the syntax, or must I use a physically mapped drive letter?
I am not sure if this is necessarily a simple question, but I'm somewhat new to SQL so I thought maybe there's an obvious answer I just don't know about.
The problem is that I have one "master" table, and a child table that has two foreign key references back to that master table. Both of these foreign key constraints are marked as "on delete cascade" with the intention that should a row from the master table be deleted, any rows that reference that object in EITHER foreign key field should be deleted.
I am wondering why this causes a cycle. It seems logical enough to me; it just involves two passes over the table, one for each affected column.
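Here is a stripped-down version of what I am trying to do (names made up); the second constraint is the one SQL Server rejects with the "may cause cycles or multiple cascade paths" error:

CREATE TABLE dbo.Master (
    MasterId int PRIMARY KEY
);

CREATE TABLE dbo.Child (
    ChildId   int PRIMARY KEY,
    FirstRef  int NOT NULL,
    SecondRef int NOT NULL,
    CONSTRAINT FK_Child_First FOREIGN KEY (FirstRef)
        REFERENCES dbo.Master (MasterId) ON DELETE CASCADE,
    CONSTRAINT FK_Child_Second FOREIGN KEY (SecondRef)
        REFERENCES dbo.Master (MasterId) ON DELETE CASCADE   -- this one is rejected
);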
One of our developers has written a view which will execute completely (returning ~38,000 rows) in approximately 1 minute from SSMS (results start at 20 seconds and it completes by 1:10 consistently).
However, if he adds a data flow task in SSIS, adds an OLE DB source, sets the Data Access Mode to "Table or view", and then selects the same view, it consistently takes over 30 minutes (at which point we've been killing it). I can see the activity in Activity Monitor; it is doing a SELECT * from that view and is runnable the whole time.
If we modify the view to SELECT TOP 10, it returns in a short time.
Has anyone run into this problem? Any suggestions? It is very problematic, because if the views change we have to hack around this problem again.
I have created a package that uses SSIS checkpointing as a failure-retry mechanism. I know that when this package fails, on restart it starts from the task where it failed. Is it possible for me to override this on a custom condition and start the package at an earlier task that executed successfully? Example: checkpointing is enabled, and the flow is FTP task -> Write to Staging Table -> Write to Target Table. Assume I am downloading an XML file through FTP and writing it to a table.
FTP Download is successful.
The "Read from XML file and write to Staging table" task failed because the downloaded file is not well-formed XML.
Here, the FTP task completed successfully; the package failed only in the second task. When I re-run the package, it starts from the second task [Read from XML file and write to staging table] because of checkpointing.
Is it possible for me to restart from the FTP task on a custom condition, even though it executed successfully?
I need to write a process to get the file size in KB and the record count of a file. I was planning on writing a C# console app that takes the file path and name as parameters; however, should I use a CLR function instead?
I can't put a script in the SSIS package that brings the file down, because it has been deemed that we only use SSIS for file consumption.
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
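For clarity, this is the sort of statement I have in mind for pre-allocating the new file (the database, filegroup, path, and sizes are made up):

-- Pre-allocate an additional data file in the same filegroup (made-up names)
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = N'MyDatabase_Data5',
    FILENAME = N'E:\SQLData\MyDatabase_Data5.ndf',
    SIZE = 70GB,
    FILEGROWTH = 4GB
) TO FILEGROUP [DataFG];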
On one server we had file growth, and we had to add a new hard drive and a new data file on it. Now we have a new server with a huge hard drive, but all the files remain. Can I reduce these files to one data file or not?
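If it is possible, I assume it would be something along these lines, emptying each extra file into the rest of the filegroup and then removing it (file names made up); as far as I know, the primary data file itself cannot be removed:

-- Move the data out of one of the extra files into the remaining files
DBCC SHRINKFILE (N'MyDatabase_Data2', EMPTYFILE);

-- Once empty, the file can be dropped from the database
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Data2;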
I am able to run SSIS packages as SQL Server Agent jobs with a Control Flow "File System Task" if I move a file (test.txt) from a drive (C:) on the server (where the SQL Agent jobs run) to a subdirectory on the same drive. But if I try to move a file on a network drive, the package fails.
Historically I've always written a VB script to copy a file from a SharePoint library. I don't like this method because I have to put a username and password in the script and maintain a config file.
Yesterday I was playing around with using a File System Task instead. The SharePoint file has a UNC path, so why not? I created a simple test package with a single File System Task that copies the SharePoint file (addressed via UNC) to another network location. The package runs fine locally.
I tried running it on our utility server but am getting a "The file name [SHAREPOINT UNC PATH] specified in the connection was not valid" error. The package is running with a proxy on the server, and the proxy account has the same permissions to the SharePoint site (as far as I can tell) as I do.
One of my database's data files is 100 GB and the log file is 500 GB. The DB is in the full recovery model and transaction log backups happen once every 6 hours. Even then, the database log file isn't reducing in size.
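My understanding is that the file will only get smaller if I explicitly shrink it after a log backup, something along these lines (the database, file, and path names are made up):

-- See what is preventing the log from being truncated
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- After a transaction log backup, try shrinking the log file
BACKUP LOG MyDatabase TO DISK = N'E:\Backup\MyDatabase_log.trn';
DBCC SHRINKFILE (N'MyDatabase_log', 10240);   -- target size in MB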
I am programmatically deploying an Excel 2007 file to a SQL Server 2005 Reporting Services server. The problem is that if a file with the same name already exists, that file isn't replaced; I would like the opposite to happen. I'm using the following code:
--Executable
set svr=http://w3sdwsqld1/reportserver
set src_fld="\w3sdwsqld1\deploy\SAD\ECRANS\UPDATES_20061127_190000\Ecrans\AM\Associados\"
set dest_fld="Associados"
set script="\w3sdwsqld1\deploy\SADECRANS\UPDATES_20061127_190000\Ecrans\AM\Associados\PublishReports.rss"
REM Sample: deploy.bat http://w3sdwsqld1/reportserver "\w3sdwsqld1\deploy\SAD\ECRANS\UPDATES_20061127_190000\Ecrans\AM\Associados\" "Associados" "\w3sdwsqld1\deploy\SADECRANS\UPDATES_20061127_190000\Ecrans\AM\Associados\PublishReports.rss"
for /R %src_fld% %%f in (*.xlsx) do rs -i %script% -s %svr% -v ParentFolder=%dest_fld% -v reportP="%%~nf" -v path=%src_fld%
PAUSE
--rss Code
'
' Script Variables
'
' Variables that are passed on the command line with the -v switch:
'
' (a) parentFolder - corresponds to the folder that the script creates and uses
'     to contain your published reports
' (b) reportP - corresponds to the report to publish
'
Dim ROOT As String = "/SAD/Ecrans/Ecrans/AM"
Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing
Dim parentPath As String = ROOT + "/" + parentFolder
Dim filePath As String = path
Dim report As String = reportP

'Create the parent folder
Try
    rs.CreateFolder(parentFolder, ROOT, Nothing)
    Console.WriteLine("Parent folder {0} created successfully", parentFolder)
Catch e As Exception
    Console.WriteLine(e.Message)
End Try
'Create shared data source
'CreateSampleDataSource("Solucao_Integrada", "OLEDB-MD", "Data Source=dwareas1;Initial Catalog=SAD_Solucao_Integrada")

'Publish the sample reports
PublishReport(report)
End Sub
Public Sub CreateSampleDataSource(name As String, extension As String, connectionString As String)
    'Define the data source definition.
    Dim definition As New DataSourceDefinition()
    definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
    definition.ConnectString = connectionString
    definition.Enabled = True
    definition.EnabledSpecified = True
    definition.Extension = extension
    definition.ImpersonateUser = False
    definition.ImpersonateUserSpecified = True
    'Use the default prompt string.
    definition.Prompt = Nothing
    definition.WindowsCredentials = False

Catch e As Exception
    Console.WriteLine(e.Message)
End Try
End Sub
Public Sub PublishReport(ByVal reportName As String)
    Try
        Dim stream As FileStream = File.OpenRead(filePath + reportName + ".xlsx")
        Console.WriteLine(reportName)
        definition = New [Byte](stream.Length) {}
        stream.Read(definition, 0, CInt(stream.Length))
        stream.Close()
    Catch e As IOException
        Console.WriteLine(e.Message)
    End Try

    Catch e As Exception
        Console.WriteLine(e.Message)
        Console.WriteLine("Failed to publish report")
    End Try
End Sub
When I send my query results to a file in SQL Server Management Studio, how come I'm seeing the following in Notepad++?
FH   TEST
"FH", which I thought should be in a CHAR(2) data column, is there, but "TEST" seems to start in column 6, not column 3 as I would have expected. I was expecting... FHTEST.
Hi all, I have the "Northwind" database in my SQL Server Management Studio Express.
In my C:ProSSEAppsSamplesForChapter02Chapter02 folder, I have the following 2 files:

(1) ListColumnValues (MS-DOS Batch File)
sqlcmd -S .sqlexpress -v DBName = "Northwind" CName = "CompanyName" TName = "Shippers" -i c:prosseappschapter02ListListColumnVales.sql -o c:prosseappschapter02ColumnValuesOut.rpt

(2) ListColumnValues (Microsoft SQL Server Query File)
USE $(Northwind)
GO
SELECT $(CompanyName) FROM $(Shippers)
GO

When I ran the batch file:
C:ProSSEAppsSamplesForChapter02Chapter02>ListColumnValues.bat
I got the following "ColumnValuesOut.rpt" with error messages:
'Northwind' scripting variable not defined.
Msg 102, Level 15, State 1, Server L1P2P3SQLEXPRESS, Line 1
Incorrect syntax near '$'.
'CompanyName' scripting variable not defined.
'Shippers' scripting variable not defined.
Msg 102, Level 15, State 1, Server L1P2P3SQLEXPRESS, Line 1
Incorrect syntax near 'CompanyName'.
I copied these T-SQL statements from a book and I do not know how to correct them. Please help and tell me how to correct these errors.
I'm not really sure my question belongs here...
I have a database in Access (from Microsoft Office, of course), and I want to convert it to SQL Server. Does anyone know how I can do that? It must be very simple, but I haven't found out how yet...
Please advise me: what does the .ldf file consist of, and can I shrink the .ldf? Is it advisable to shrink it after the backup, and how frequently can this be done on a production DB?
Please advise me: can I shrink the .mdf? Is it advisable to shrink it after the backup, and how frequently can this be done on a production DB?
Hi all, please can someone help me... I have to insert a CSV into a table in SQL Server, but the problem is that the file is on one server and SQL Server 2005 is on another server. How do I insert the file? Please help me.
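Would something like BULK INSERT with a UNC path to the file on the other server work, assuming the SQL Server service account can read that share? (The server, share, table, and file names below are made up.)

-- Made-up names: a staging table whose columns match the CSV layout
BULK INSERT dbo.CsvStaging
FROM '\\FileServer01\Imports\data.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2   -- skip the header row
);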
I have written my application in VB.NET, with the data file being a SQL Server .mdf file currently stored on my hard drive and accessed through an instance of SQL Express.
However, the intention is to have the file on a file server, so that the client will be on users' computers and the database on the server.
I just tested my app with the .mdf file on a file server and got this error:
The file "\mazepcc$DeveloperTestDBTestSQL.mdf" is on a network path that is not supported for database files. An attempt to attach an auto-named database for file "\mazepcc$DeveloperTestDBTestSQL.mdf" failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
Oh please PLEASE tell me I can access an mdf file on the network!!
I am in BIG s**t if it is not possible.
If not can anyone please suggest a way to save my bacon?
For the record a SQL Server instance on this or any other server is out of the question!
I thought that as VS.NET provided all the drivers etc. it would work.
I am in meltdown panic mode!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Hi, my web host (1and1) is running SQL Server 2000, and their web tool supports the import of .bak files. However, when I try to import my .bak files created in SQL Server Management Studio Express, I get the following error: "The backed-up database has on-disk structure version 611. The server supports version 539 and cannot restore or upgrade this database. RESTORE FILELIST is terminating abnormally." I have Googled this error and learnt that 2005 .bak files are not compatible with SQL Server 2000. I'm just wondering if there are any workarounds for this, or alternative tools with which I can create 2000-compatible .bak files from my 2000/2005 .mdf files. Thanks in advance.