1. First issue
The port that had already been allocated was 2059. We changed it to the default port, 1433, on the node. Now we are able to connect to the node from the client application, but we can no longer see SQL Server Configuration Manager in any of the environments.
2. Second issue
Cluster environment
Error while executing the package: the connection details are not loaded in the connection manager tab.
I have a simple full-text search query that works perfectly on my own computer using SQL Server 2005 Express. On the production server (shared hosting), the full-text search worked perfectly when I added the first 50+ rows, but as the number of rows increases, the full-text search can only see those first 50+ rows, not the new ones. Is there a quick solution for this, or is it just a common developer mistake of not properly indexing the columns? Is there a way to re-index all rows without losing data on the live server? Search query:

SELECT TOP 50 *
FROM li_Bookmarks
WHERE FREETEXT(Keywords, @Keywords)
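If the symptom is simply that the full-text index is never repopulated after the initial load, one likely fix is to start a population manually (or on a schedule), or to turn on automatic change tracking. A minimal sketch, assuming SQL Server 2005 syntax and that li_Bookmarks already has its full-text index; neither statement touches the table data itself:

-- Repopulate the existing full-text index from scratch:
ALTER FULLTEXT INDEX ON li_Bookmarks START FULL POPULATION;

-- Or let SQL Server keep the index current as rows are added or changed:
ALTER FULLTEXT INDEX ON li_Bookmarks SET CHANGE_TRACKING AUTO;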
I currently have a SQL Server cluster set up with a primary DB server, SERVER1, and a standby server, SERVER2. SERVER1 has been failing more than normal in the past few weeks, and it takes up to 5 minutes for SERVER2 to realize that SERVER1 is down. I am looking for a better way to implement a backup server in production with minimum downtime. Please advise.
Production and development servers are on different domains, and the domains do not trust each other. How do I export table t1 from database db1 in production and load it into table t1 in database db1 in development?
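Since Windows authentication cannot cross the untrusted domains, one workable route is SQL authentication plus an ad hoc distributed query. A minimal sketch, assuming the 'Ad Hoc Distributed Queries' option is enabled on the development server; the server name, login, and password are hypothetical placeholders:

-- Run on the development server: pull t1 across using SQL authentication
-- (use the 'SQLOLEDB' provider instead of 'SQLNCLI' on SQL 2000):
INSERT INTO db1.dbo.t1
SELECT *
FROM OPENDATASOURCE(
        'SQLNCLI',
        'Data Source=PRODSERVER;User ID=dev_reader;Password=StrongP@ssw0rd'
     ).db1.dbo.t1;

When no direct connection between the domains is allowed at all, bcp out to a file, copy, and bcp in is the other common route.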
Any help would be greatly appreciated. My problem is that I need to set up a backup SQL Server 2000 machine which can be used in case of a failure of my primary. All databases (30 as of now) must be an up-to-the-minute exact copy of production and include the most recent changes in data as well as any structure changes (tables, views, SPs, triggers, users, etc.). When I tried this using transactional replication, the replication process gets fouled up once I introduce any kind of structure changes to the DB. I've considered the idea of doing periodic backups and restoring them to my backup SQL server, but this does not give me the concurrency needed with zero latency. I've seen articles that recommend using transactional replication with a 'Scheduled Table Refresh', and also doing database dumps to restore on the backup machine, but I have not been able to find any documentation regarding this to try out. How can I implement this type of backup strategy in SQL 2000?
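The usual SQL 2000 answer to this requirement is log shipping: transaction log restores carry schema changes (tables, views, SPs, triggers) along with the data, which is exactly where replication falls over. Enterprise Edition ships a log shipping wizard in the maintenance plans; a hand-rolled sketch of the same idea is below, assuming the share and file names are placeholders and that the standby copy was first initialized from a full backup restored WITH STANDBY or NORECOVERY:

-- On the primary, as a SQL Agent job every few minutes, per database:
BACKUP LOG db1 TO DISK = '\\STANDBY\logship\db1.trn' WITH INIT;

-- On the standby, scheduled just after, leaving the database read-only
-- and able to accept the next log restore:
RESTORE LOG db1
FROM DISK = '\\STANDBY\logship\db1.trn'
WITH STANDBY = 'D:\MSSQL\Data\db1_undo.dat';

This gives minutes of latency rather than zero, but it survives schema changes, which transactional replication in SQL 2000 does not.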
I would like to deploy several reports to a production server. Do I need to install the entire Reporting Services product in order to run the reports, or is it possible to install just runtime files on it to run them?
Please help; I have almost 100 reports to be deployed on this server, which is located in another country.
Thanks for the helpful information.
(I am using SQL Server 2005 / Reporting Services 2005.)
OK, OK, stop laughing. For real, is there any programmatic way of doing this? Whoever created this database I inherited (SQL 2000) put the LDF and data files on the same drive, and in the same folder for that matter. I'm just trying to do a little disaster management.
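One scriptable way to relocate the log file in SQL 2000 is detach, move, reattach. A minimal sketch; the database and file names are hypothetical, and the database must have no active connections while it is detached:

-- Detach the database:
EXEC sp_detach_db 'mydb';

-- Move mydb_log.ldf to the other drive at the OS level, then reattach,
-- pointing at the new log location:
EXEC sp_attach_db 'mydb',
    'D:\MSSQL\Data\mydb.mdf',
    'E:\MSSQL\Logs\mydb_log.ldf';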
This feels like a silly question, but I'm going to ask it anyway...
I have limited SQL Server experience, but have run into a wall with a client's Web/Access combination. I need to upgrade to SQL Server. I have Beta 3 installed on a development box and am very happy with it. Is anyone running this thing in a production environment? This isn't going to experience huge loads, so I'm tempted. Tell me if I'm crazy for wanting to try it.
I have a brand-new database server with just the system databases. I need to copy about four production databases from another server to this new server. Can I take the latest production backups and restore them on the new server without creating empty databases there first? If anyone has a better approach, I would appreciate it.
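Restoring straight from the backups works; RESTORE DATABASE creates the database, so no empty shells are needed. A per-database sketch with hypothetical names and paths; run FILELISTONLY first to learn the logical file names inside each backup:

-- See which logical files the backup contains:
RESTORE FILELISTONLY FROM DISK = 'D:\Backups\ProdDB1.bak';

-- Restore, relocating the files onto the new server's drives:
RESTORE DATABASE ProdDB1
FROM DISK = 'D:\Backups\ProdDB1.bak'
WITH MOVE 'ProdDB1_Data' TO 'D:\MSSQL\Data\ProdDB1.mdf',
     MOVE 'ProdDB1_Log'  TO 'E:\MSSQL\Logs\ProdDB1_log.ldf';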
I have a production SQL Server 2000. The server sometimes disconnects by itself, and sometimes it works fine. Sometimes it doesn't even restart. Is this a problem with service packs, or some performance issue? Can a SQL guru give me your best suggestion and tell me how I should proceed?
Hi, I ran test data on my development machine and it took 1 minute to insert the data. I ran the same set of data on the server and it took 5 minutes. I checked both databases and everything is the same. I even copied the production DB onto my machine and it still took about 1 minute. I looked at the fragmentation, and all the numbers are better on the server than on my development machine, so it should be faster. In the application I put in some timers and discovered that the insert is taking 0.015 ms on the server and 0 on the development machine, so the problem is in the insert. It is a web application using ASP.NET. Here are the specs of the computers:

Development: P4 HT 3.2 GHz, 1 GB memory, running Windows XP
Server: Xeon 2.8 GHz, 1.5 GB memory, running Windows 2000 Server

Any idea how I can pinpoint the problem? I'm not at the point of thinking that it can be the hardware, but how do I verify that? Thanks, Frank
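One way to narrow this down from the SQL side is to time the identical statement on both machines using the engine's own statistics, which takes ASP.NET and the network out of the picture. A sketch; the INSERT below is a hypothetical stand-in for the application's real statement:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Run the application's actual INSERT here on each machine, e.g.:
INSERT INTO dbo.TestTable (Col1) VALUES ('sample row');

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Compare CPU time, elapsed time, and reads between the two machines.
-- If the engine-side numbers match, the difference is in the network
-- or the ASP.NET tier, not the hardware.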
Like most enterprises, we have database administrators (DBAs) and developers (devs). The DBAs are conservative, while the developers are out exploring their options.
One of the things I'm currently experimenting with is data visualization: generating images from the data. Like most people doing this, I needed to "create" the System.Drawing assembly in the database, marking it unsafe.
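For reference, the registration step described here looks roughly like the following sketch; the framework path is machine-specific, and UNSAFE also requires the database to be marked TRUSTWORTHY (or the assembly to be signed):

-- Enable CLR and register System.Drawing with the UNSAFE permission set:
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

CREATE ASSEMBLY [System.Drawing]
FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll'
WITH PERMISSION_SET = UNSAFE;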
During my testing, my code threw an exception and that brought down SQL Server.
I read that the CLR is better than the sp_OA procedures as well as extended stored procedures written in C++, because it isolates execution in a separate app domain and termination is clean: in case of any errors, it should not bring down SQL Server.
I also read that marking an assembly unsafe voids these isolation benefits.
Instead of having to manage the situation with code review by the DBAs and asking them to take on some risk, is there a technique by which all CLR code that runs on the production server poses no stability issues?
How can I apply SP2 to a running production server, given that the SP resets all protocols (from TCP/IP enabled on the production server back to Shared Memory only), without disrupting its connections and thereafter its work?
Howdy; I've tried this in the 'tools' area, but that didn't work too well. I suspect I will have to generate T-SQL code and then schedule it as a job. Why I can't just drag and drop with basic desires is beyond me, but THAT probably does exist.
Anyway, here is the problem [this server has many databases, on SQL 2000 SP2]:
1. The user only wants me to use Monday morning's full backup, which is good in that it doesn't include transaction logs.
2. Restore that data over top of / into the Development DB. Good: no data to worry about damaging.
3. The user does NOT want me to do this by hand, but wants it scheduled.
OK: a. I must do a RESTORE WITH FILELISTONLY from [?] what, master? And if I use the *.bak of the production database, it has a coded date field in the file name. SO, I would, I guess, have to generate all sorts of wonderful code to find the date and build a file name, because using FROM DISK = 'F:\MSSQL\BACKUP\DBPRODUCTION_yyyyddmm.BAK' is not going to work with a wildcard. Can I do a file lookup using a 'PRODUCTION' prefix into a variable and then use that, or should I look for the latest file date [remember there are several database backups here], or ????
Then: how does one schedule such a T-SQL script? Do I save it to some text file and invoke it with a job scheduler?
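One way to put this together is sketched below; the folder layout and the logical file names (get the real ones from RESTORE FILELISTONLY) are assumptions, and xp_cmdshell must be available. The finished script can be pasted straight into a SQL Server Agent job step, which answers the scheduling question: no text file or external scheduler is needed.

-- Find the newest DBPRODUCTION_*.BAK and restore it over Development.
DECLARE @file varchar(260), @sql varchar(1000)

-- dir /b /o-d lists bare file names, newest first:
CREATE TABLE #files (id int IDENTITY(1,1), name varchar(260))
INSERT INTO #files (name)
EXEC master..xp_cmdshell 'dir /b /o-d F:\MSSQL\BACKUP\DBPRODUCTION_*.BAK'

SELECT TOP 1 @file = 'F:\MSSQL\BACKUP\' + name
FROM #files
WHERE name LIKE 'DBPRODUCTION%'
ORDER BY id

-- Build and run the restore; REPLACE overwrites the existing Development DB.
SELECT @sql = 'RESTORE DATABASE Development FROM DISK = ''' + @file + ''''
            + ' WITH MOVE ''Production_Data'' TO ''F:\MSSQL\Data\Development.mdf'','
            + ' MOVE ''Production_Log'' TO ''F:\MSSQL\Data\Development_log.ldf'', REPLACE'
EXEC (@sql)

DROP TABLE #files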
Group, I have an ASP.NET web application using the default ASPNETDB that I built in VWD 2005 and deployed to a production server (IIS). I need to make a table addition to the production database (ASPNETDB). I'm trying to use SQL Server Management Studio to attach the database, but it keeps saying the database is locked by a process. I shut down all the services that would likely be using it, but it's still showing as locked. What do I need to shut down to attach the database and make the change? Or is there a better way to do it?
Hey everybody! I'm getting a pretty annoying error on my production server when I want to run an app. The error message (originally in German) reads: "SqlDateTime overflow; must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM." The funny thing is that on my client development machine I don't get an error at all. The DateTimes I use (C# and SQL Server) are formatted dd/mm/yyyy hh:mm:ss. I also don't write to the database - only read! Anyone familiar with this issue? Thanks in advance & best regards!
I am in deep shit with this... it's my life and survival...
How do I configure a linked server for Oracle?
My Oracle instance name is dbserver.internal.ca.com; how do I query it, since I get an error when I use the above name? I had this come up some time earlier.
What information do I need to create a linked server for Oracle on SQL Server, using either OLE DB or ODBC? (See the sketch below.)
One more question: is there a way I can import data from Oracle to SQL Server without having an Oracle client installed on the machine where SQL Server lives?
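For the linked-server part, a minimal sketch using the Microsoft OLE DB Provider for Oracle. The linked server name and the scott/tiger login are placeholders, and @datasrc must be a TNS alias defined in the Oracle client on the SQL Server box, not the raw host name; note that MSDAORA does require the Oracle client software to be installed:

-- Create the linked server:
EXEC sp_addlinkedserver
    @server     = 'ORA_LINK',
    @srvproduct = 'Oracle',
    @provider   = 'MSDAORA',
    @datasrc    = 'DBSERVER';          -- TNS alias from tnsnames.ora

-- Map SQL Server logins to an Oracle login (scott/tiger is a placeholder):
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'ORA_LINK',
    @useself     = 'false',
    @rmtuser     = 'scott',
    @rmtpassword = 'tiger';

-- Query with four-part naming; Oracle object names are usually upper-case:
SELECT * FROM ORA_LINK..SCOTT.EMP;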
What is the right procedure for shutting down a production web SQL Server for maintenance purposes? Checking users with sp_who? Any other monitoring for current connections? Locks?
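A quick sketch of the usual pre-shutdown checks (sp_who2 is undocumented but widely used):

EXEC sp_who2;   -- current sessions: who is connected and what they are running
EXEC sp_lock;   -- current locks, by session

-- Once the important sessions are gone, you can block new connections per
-- database before stopping the service, e.g.:
-- ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK AFTER 60 SECONDS;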
I have a 3rd-party app which gets the error below on my SQL 2000 box. I can't change the app or the DB, so I am going to need to tweak SQL to make the plane fly. The box is a new Dell quad attached to a SAN, running Windows 2003 with 16 GB of RAM and 450 concurrent users. Does anyone think more memory would help? I have the locks set to the default. Are locks more of a memory killer, or CPU?
Error: 1204, Severity: 19, State: 1 The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configuration.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
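On the memory-versus-CPU question: locks are primarily a memory consumer. In SQL Server 2000 each lock structure costs 96 bytes, and with the 'locks' option left at its default of 0 the server allocates lock memory dynamically as needed. A quick sketch for inspecting the setting ('locks' is an advanced option; the 20000 in the commented line is purely illustrative):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the current lock configuration; 0 = dynamic allocation (the default):
EXEC sp_configure 'locks';

-- A fixed value can be set instead, but it requires a restart to take effect:
-- EXEC sp_configure 'locks', 20000;
-- RECONFIGURE;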
We have SQL Server in production with dozens of databases that reside on a SAN in a RAID 1 configuration with 10K SAS disks. A few databases put a huge load on the SAN (more writes than reads). As there are some performance issues, we are considering a new SAN.
Is there any way we can get average IOPS on this server? I tried to use PerfMon but without success; I don't know whether I can choose "_Total" for the logical drives in the PerfMon counters. I have read dozens of pages saying use SQLIO, take these counters, use those, but with no straightforward way to calculate the number. I know the SAN itself could have the relevant IOPS information, but our old one doesn't provide it.
How can I calculate it (without those links, or counters that come with no calculation procedure at the end)?
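Two straightforward options. In PerfMon, "LogicalDisk: Disk Transfers/sec" is reads plus writes per second, which is effectively IOPS, and the _Total instance is fine as long as every volume it aggregates lives on this SAN. From inside SQL Server (2005 or later) you can also derive it from the cumulative file statistics, as in this sketch:

-- Snapshot the cumulative I/O counts for every database file:
SELECT DB_NAME(database_id) AS database_name,
       SUM(num_of_reads)    AS total_reads,
       SUM(num_of_writes)   AS total_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id;

-- Take a second snapshot N seconds later;
-- average IOPS ~= (delta reads + delta writes) / N.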
Dear all, I'm planning to drop all the nonclustered indexes on a production database (as they are not configured well) and run the latest script to create fresh nonclustered indexes on specific columns.
Now my doubts are: 1) Will replication be affected by dropping and recreating the indexes? 2) What query will drop all the nonclustered indexes on that database? Can I use delete from sysindexes where indid > 1; will that work to drop them all? (See the sketch below.) 3) Is it necessary to generate a snapshot again after creating the new indexes, or can I drop and run the script at the subscriber as well?
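On the second doubt: deleting from sysindexes is not a supported way to drop indexes and can corrupt the database; DROP INDEX is the supported route. A sketch that generates the DROP INDEX statements for review first (SQL 2000 system tables; indexes that back PRIMARY KEY or UNIQUE constraints will error and must be dropped with ALTER TABLE ... DROP CONSTRAINT instead):

-- Generate DROP INDEX statements for all user-table nonclustered indexes:
SELECT 'DROP INDEX ' + OBJECT_NAME(i.id) + '.' + i.name
FROM sysindexes i
WHERE i.indid BETWEEN 2 AND 254                          -- nonclustered only
  AND OBJECTPROPERTY(i.id, 'IsUserTable') = 1
  AND INDEXPROPERTY(i.id, i.name, 'IsStatistics') = 0    -- skip auto-stats
  AND INDEXPROPERTY(i.id, i.name, 'IsHypothetical') = 0;

-- Review the generated statements, then execute them.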
Please guide me in this regard.
Arnav. Even if you learn 1%, learn it with 100% confidence.
I have created a report against MySQL using an ODBC connection. It works on my localhost server, but when I deployed it to the production server I get an error. I checked my data source connection against my localhost and both have the same settings. What could be wrong? Please help; I am very desperate to make this work. Thanks. The error: "An error has occurred during report processing. Cannot create a connection to data source 'Payment'. For more information about this error navigate to the report server on the local server machine, or enable remote errors."
On the production server, I just wanted to check the logs for some users, so I ran the query below:

SELECT TOP 100 CONVERT(xml, logproperties), *
FROM EventLog
WHERE loguserid IS NULL
ORDER BY LogCreateDate DESC

The table has about 100 million records. Unfortunately, it took some time, and during that time my entire server was down. I just want to know: is it possible to take the entire server down just by running a SELECT query?
How can I keep a production SQL server and a DR SQL server in sync? I'm not referring to data - I can use SAN replication for that. I'm referring to the many configuration settings and objects that are external to the databases:
- Server settings (xp_cmdshell, etc.)
- Logins
- Server- and database-level roles
- Server- and database-level permissions
- Object-level permissions
- Maintenance plans/SSIS packages
- SQL Agent jobs
- Job schedules
- Database Mail configuration
- Objects (proxies, alerts, credentials, operators)
- Linked servers
- Scheduled tasks
- ODBC connections
I realize that many of these items can be scripted out and periodically run on the DR server, but not all of them can be. And some, although they can be scripted, may be problematic to restore on a DR server, such as SQL Agent jobs. Even with backup scripts, it seems like developing a process to refresh a DR server would be overwhelming given the number of elements and exceptions. I'm not averse to doing the work to create a completely scripted solution, but I was curious whether one already exists that handles all or most of the elements in my list.
The -de option is there because I changed the default protection level of the package, since BOL says of the default protection level (EncryptAllWithUserKey): "Only the user who created or exported the package can open the package in SSIS Designer or run the package by using the dtexec command prompt utility."
The problem is that the package does not run successfully and spits out a lot of error messages that I do not understand, such as:
<errorMessage>
Error: 2006-02-15 18:13:43.55 Code: 0xC004706E Source: Data Flow Task DTS.Pipeline Description: The module containing "component "Multicast" (637)" cannot be located, even though it is registered. End Error
Error: 2006-02-15 18:13:43.55 Code: 0xC004706E Source: Data Flow Task DTS.Pipeline Description: The module containing "component "OLE DB Destination" (756)" cannot be located, even though it is registered. End Error
...
Error: 2006-02-15 18:13:43.62 Code: 0xC0048021 Source: Data Flow Task Multicast [637] Description: The component is missing, not registered, not upgradeable, or missing required interfaces. The contact information for this component is "Multicast;Microsoft Corporation;Microsoft SqlServer v9; (C) 2005 Microsoft Corporation; All Rights Reserved; http://www.microsoft.com/sql/support;0". End Error
Error: 2006-02-15 18:13:43.62 Code: 0xC0047017 Source: Data Flow Task DTS.Pipeline Description: component "Multicast" (637) failed validation and returned error code 0xC0048021. End Error
</errorMessage>
Notice that at the end it says that the validation failed.
I looked on the internet but did not find any information regarding this issue.
I also created a dummy SSIS package that has no data flow, just a single SQL task doing a single insert into a table. When I execute it on the production server, it runs successfully!
So what is wrong with my "complex" package that makes all the data flow components "unable to be located, even though they are registered"??? To be exact, I get the error for every data flow component except the script component.
Thanks for any help.
Best regards,
Francois Malgreve
PS:
I would also like to mention that on my production server only the SSIS service is installed, not SQL Server itself. When you run the SQL 2005 setup, it is indeed possible to install only the SSIS component, which is what I need in my case, as the SSIS packages are run from a middle-tier server and connect to various DBs.
I think this is important, as I just discovered that the package runs successfully on a server which has a full installation of SQL Server 2005: by that I mean SQL Database Server + SSIS installed. But even there it will run only from the command line; if I run it from an ASP.NET application it won't run successfully and returns the same kind of errors I showed in this message. In a desperate attempt to solve my problem, I granted more rights to the ASPNET and NETWORK SERVICE users by adding them to the administrators group, as I quite believe it might be related to security, since on that server it works well from the command line. But it did not help.
Hello, is SQL Server 2005 mirroring production-ready yet? We have two servers and plan to set up mirroring between them. We have Standard Edition installed on them. Is Standard Edition sufficient, or does it need Enterprise Edition?
I need help with deployment of reports (SSRS 2005). We need to deploy many reports (more than 100) to production every time we make changes to them.
The problem is that we cannot install Visual Studio on the server to open the project and use its deploy option. I tried using the script PublishSampleReports.rss,
but every time I run it I have to go and manually configure each report to map the shared data source to it. Is there any way to pass that information in the script itself, or is there any smarter way to deploy to production?
I have been researching the best way to do this for some time. I use SQL Server 2005 Standard, so replication is out of the question.
I have discovered Slowly Changing Dimensions, linked servers, etc., which may help me bring the data out of Oracle 9i and into SQL Server. Ideally I want the data refreshed on a daily basis. I want to specify which tables and views are taken from Oracle and manage the relationships in SQL Server.
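For a simple daily pull, one pattern is a linked server to Oracle plus a scheduled INSERT ... SELECT into staging tables, with the relationships then managed on the SQL Server side. A minimal sketch; the linked server, schema, and table names are placeholders:

-- Nightly refresh of one Oracle table into a SQL Server staging table,
-- run as a SQL Agent job step:
TRUNCATE TABLE dbo.Stage_Customers;

INSERT INTO dbo.Stage_Customers (CustomerId, CustomerName, LastModified)
SELECT CUSTOMER_ID, CUSTOMER_NAME, LAST_MODIFIED
FROM ORA9_LINK..APPSCHEMA.CUSTOMERS;   -- four-part name via the linked server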
Any direction as to best practices for this task would be greatly appreciated. I am fairly new to the game of synchronizing two major server products, but I have worked extensively with data integration on smaller scales.