Right now, we have the production and development databases and the ASP pages all on one server, which is, of course, causing a strain on performance.
Well, the customer finally agreed to get two additional (less powerful) servers. This is what we are planning to do with them:
Server 1: Original server (fast)
We will only have the production database here (NO IIS).
Server 2: New Machine 2 (faster box than Server 3)
Offload IIS away from the production database so we can allow more memory for database processing.
Server 3: New Machine 3 (slower box we just got)
Move the entire development environment to this one server (SQL Server database + IIS with all the ASP pages).
Does this sound OK? Or should we have IIS sitting on one box (holding pages for both development and production) while the other machine just holds the development database?
I would like to hear other opinions/stories on different IIS & SQL Server configurations to maximize performance.
I was wondering if anyone has a neat (preferably automated) method of creating small testing databases from large production instances. My requirement is to copy the schema and a subset of configuration data from a production database into a test database. The subset would be a full copy of a selected set of tables, rather than a subset of the data within one or more tables. There is a mixture of SQL 2000 and SQL 2005 servers involved in this requirement.

I'm familiar with the scripting mechanisms of Enterprise Manager and Management Studio and with DTS packages, sufficient to perform a process like this manually, but I want to productionise this process and schedule it to run automatically. I'm sure this must be a commonly performed task, so I'm interested to know if anyone has a "best practice" for this requirement.
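For the data side, one possible approach (a sketch only; the linked server PRODSRV, the database names, and the table list are all assumptions, and identity columns would need IDENTITY_INSERT handling) is a scheduled job that pulls full copies of the chosen tables across a linked server:

    -- Sketch: refresh selected config tables in a test database from
    -- production via a linked server named PRODSRV (hypothetical).
    -- The schema is assumed to be scripted separately; this moves data only.
    DECLARE @tables TABLE (name sysname)
    INSERT INTO @tables VALUES ('dbo.ConfigTableA')   -- hypothetical tables
    INSERT INTO @tables VALUES ('dbo.ConfigTableB')

    DECLARE @name sysname, @sql nvarchar(4000)
    DECLARE c CURSOR FOR SELECT name FROM @tables
    OPEN c
    FETCH NEXT FROM c INTO @name
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Full copy of each selected table: clear locally, pull from prod
        SET @sql = N'TRUNCATE TABLE ' + @name
                 + N'; INSERT INTO ' + @name
                 + N' SELECT * FROM PRODSRV.ProdDB.' + @name + N';'
        EXEC (@sql)
        FETCH NEXT FROM c INTO @name
    END
    CLOSE c
    DEALLOCATE c

Wrapped in a SQL Agent job, something like this could run nightly; the syntax above deliberately avoids 2005-only features so it runs on both versions.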
If I have stored procedures that reference linked servers, is there any way to avoid changing the procedures when moving between environments (dev, QA, prod)?
Do I resort to dynamic SQL? Use CASE statements to switch between environments?
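One commonly used alternative, sketched below under assumptions (the alias REMOTE_SRV and the data sources are hypothetical), is to create the linked server with the same name in every environment and vary only the server it points at:

    -- Each environment defines the same alias, pointing at its own target,
    -- so the procedure code never changes between dev, QA, and prod.
    EXEC sp_addlinkedserver
        @server     = N'REMOTE_SRV',    -- alias referenced by all procedures
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = N'DevRemoteHost'  -- dev points here; prod points elsewhere

    -- Procedures then always reference REMOTE_SRV.SomeDB.dbo.SomeTable.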
Is there a way to simulate hosting three environments on one Report Server? Due to resource limitations, I would like to set up three environments (DEV, TEST, STAGE) on one report server such that the data source for each environment is different. Can I use a different folder structure for each and then deploy the reports by providing the server URL accordingly?
e.g. http://servername/DEV_Reports, or should it be http://servername/ReportServer/DEV_Reports?
I was an Oracle Developer/DBA on Unix environments all through my career; very recently I am starting to manage a SQL Server 2005/Windows 2003 Server setup.
Part of my new job is to automate the loading of huge data files/flat files (3/4 GB in size) into a SQL Server 2005 database.
I have these initial questions. Since the files are too large to open at once: what sort of command-line interfaces do people use on Windows boxes for things like doing a "wc" (word count), grep'ing files, or massaging data files one line at a time (like using sed/awk commands), etc.?
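For the load itself, a minimal sketch (the file path, staging table, and delimiters below are assumptions) using native BULK INSERT, which never needs the file opened in an editor:

    -- Bulk load a large delimited file straight into a staging table.
    BULK INSERT dbo.StagingTable            -- hypothetical staging table
    FROM 'D:\loads\bigfile.dat'             -- hypothetical path
    WITH (
        FIELDTERMINATOR = '\t',
        ROWTERMINATOR   = '\n',
        BATCHSIZE       = 100000,           -- commit in chunks to keep the log manageable
        TABLOCK                             -- allows minimal logging under simple/bulk-logged recovery
    )

On the command-line side, bcp.exe drives the same engine from a shell, find /c /v "" file.dat approximates "wc -l", and findstr covers basic grep-style searches.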
I have a stored proc that executes in 2 seconds on the production and test databases. It is taking more than a minute in the dev environment.
I have verified that the SQL Server version is the same on both servers. Prod is running 2012 SP1; however, dev doesn't have SP1 yet. I am downloading it.
Both are 64-bit and have the same collation and compatibility level. I have confirmed that the proc has the same execution plan on both servers, and I have also reset the stats and imported them from prod.
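For quickly double-checking the build and edition side by side, running this on both instances and comparing the output is cheap:

    -- e.g. ProductVersion 11.0.3000.0 with ProductLevel SP1 = SQL Server 2012 SP1
    SELECT SERVERPROPERTY('ProductVersion') AS version,
           SERVERPROPERTY('ProductLevel')   AS sp_level,
           SERVERPROPERTY('Edition')        AS edition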
This has probably been covered in other posts. I have been working with SSIS for the past month and I am trying to follow best practices on various items. Having worked with a different ETL tool prior to this, I am wondering what is the best approach to use for Connections and File Paths.
What I would normally do with DataStage for this would be to assign a Job Variable (and eventually a Sequence Variable) of type Path. So, if I was developing a job I would create SourceFilePath, ErrorFilePath, etc., and use these variables in a FlatFile or Dataset stage. For instance, I would assign a filename for a source as #SourceFilePath#SourceFile1.txt. During execution the job would load the variable, and the filename would resolve to C:\MyDocuments\Datafiles\SourceFile1.txt.
When it's time to move to another environment, I don't have to worry about changing values for file connections, because they are managed dynamically by a config file or whatever method.
What is the best practice that emulates this behaviour for SSIS? I've been thick and can't get my head around this. Any direction to blogs or user sites would be great. Examples, even better!
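The closest SSIS analogue I know of, sketched here under assumptions (all variable names and values are hypothetical): create package variables such as User::SourceFilePath, set the flat-file connection manager's ConnectionString property via an expression like @[User::SourceFilePath] + "SourceFile1.txt", and let a package configuration supply the variable per environment. With the "SQL Server" configuration type, SSIS generates a table like the one below, and each environment's server holds its own row:

    -- Default table created by the SSIS "SQL Server" package configuration.
    -- The row maps an environment-specific path onto the package variable.
    INSERT INTO dbo.[SSIS Configurations]
        (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType)
    VALUES
        ('FilePaths',
         'E:\ETL\Source\',                                              -- hypothetical path
         '\Package.Variables[User::SourceFilePath].Properties[Value]',
         'String')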
I want to store each SQL Server's (prod, test, etc.) connection string in XML to make deployment & configuration easy. However, the connection string must be encrypted. Does anyone have any ideas or suggestions on how to accomplish both? I know how to encrypt it, but I don't think SSIS can work with it.
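One hedged idea (a sketch; the passphrase, table, and names are hypothetical): keep the encrypted string in a SQL Server table instead of XML and decrypt it at runtime from a Script Task or Execute SQL Task:

    -- Store an encrypted connection string (SQL 2005+).
    CREATE TABLE dbo.EnvConfig (
        Environment varchar(16),
        ConnName    varchar(64),
        ConnString  varbinary(8000)           -- encrypted value
    )
    INSERT INTO dbo.EnvConfig
    VALUES ('PROD', 'MainDB',
            ENCRYPTBYPASSPHRASE('MyPassphrase',   -- hypothetical passphrase
              'Data Source=ProdSrv;Initial Catalog=MainDB;Integrated Security=SSPI'))

    -- Decrypt at package runtime and hand the plain string to the connection:
    SELECT CONVERT(varchar(8000),
           DECRYPTBYPASSPHRASE('MyPassphrase', ConnString)) AS ConnString
    FROM dbo.EnvConfig
    WHERE Environment = 'PROD' AND ConnName = 'MainDB'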
Another DBA on my team is trying to tell me that it is a widely accepted best practice to have only a single version of SQL Server in an enterprise environment. This seems counterintuitive to me, and I cannot find a source confirming this assertion.
I would think that it largely depends on your environment, but that once a new version of SQL Server has been decided on for a given application, for whatever reasons, that new version should be used moving forward for all new development.
It also seems that, as other databases grow and hardware reaches its end of life, older databases should be migrated or ported to the new version as appropriate.
Is there any reason an enterprise should always be on X version and only X version?
Hi,

It appears that BINARY_CHECKSUM can give the same checksum for different strings, which is a bit worrying. (I guess the algorithm is the problem in the context of a repeating pattern.) e.g.

    select binary_checksum('A'),
           binary_checksum('AAAAAAAAAAAAAAAAA'),
           binary_checksum('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
           binary_checksum('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA')

My question: is this approach to generating checksums adequate for managing the object scripts in SQL Server, to ensure that they haven't changed? I guess that the probability of somebody making a change to a script and ending up with the same checksum is almost negligible. Has anybody used this approach in an FDA-validated production environment, i.e. 'no ifs, no buts'? Would it stand up to scrutiny?

Any experiences, thoughts?

Regards
Liam
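If a collision-resistant alternative is acceptable, HASHBYTES (SQL 2005+) avoids the repeating-pattern weakness; a sketch (the literal is just an illustration):

    -- A cryptographic hash, so accidental collisions like the above are
    -- practically impossible. Note: input is capped at 8000 bytes before
    -- SQL Server 2016, so long object scripts would need to be chunked.
    SELECT HASHBYTES('SHA1', 'CREATE PROCEDURE dbo.MyProc AS ...')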
We have multiple development environments on one machine for our Visual Studio .NET projects. These are web applications, so we have organized them by IP address and host headers.

When I installed Reporting Services, it installed into my default website (localhost), and that is fine. However, we want to be able to create our reports in our different .NET projects. The only projects that will allow me to create reports are the projects using the default website.

When I right-click to create a new project, the Business Intelligence Projects (reports) type is not there.

Can I do this? If so, how do I configure it?
We are upgrading a production database server to new hardware. The server is currently running SQL Server 2000 Standard Edition. We are thinking about installing SQL Server 2000 Enterprise Edition; however, that would mean the test server (2000 Standard) and production server (2000 Enterprise) would have different editions of SQL Server. How much of a risk does this present? Later in the year we would upgrade test to SQL Server 2000 EE, but for a couple of months the environments would be different.
I modified an existing package created by someone who left the organization. It's an FTP bulk insert from flat files into tables. The need was to update 2 of the 9 tables with additional columns. I made the changes to the package and it runs successfully on my laptop.
After that I built the package and tested it on the development server. Although the package runs successfully, the data is written to the DB tables on my laptop instead of the development server's DB tables! I see "." in place of the connection manager's server name. The DB name is the same in all three environments (local, development & production). There is a configuration table used in the package.
SSIS is behaving differently in different environments, but the code is the same.
One thing is not working correctly: I am converting a string column to a float in a Data Conversion transformation. In our local environment the package works fine, but in the production environment it does not work correctly; it is unable to convert the data and throws an error:
"The data value cannot be converted for reasons other than sign mismatch or data overflow"
Is it possible to configure environments (dev, tst, prd) in a VS Tabular solution, like it is done in SSIS? I can find the Variables tab, but I don't see a way to set a connection (string) to be changed dynamically between the environments.
Hello. What I'm trying to do should be fairly simple, but after an hour of searching and trials, I can't seem to get it working. When I run my website on my test machine, I would like it to use a certain value for my connection string, but when it's run on the live server, it should use a different connection string. This way, when it's on the test server it'll use the test database.

In my Web.config file, I have this:

    <connectionStrings>
      <add name="myConnectionString"
           connectionString="Data Source=TEST_DB;Initial Catalog=tester;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>

I would like to be able to change the connection string in something like the Global.asax file when the application is loaded. I'd check for a certain environment variable, or some other condition, and change the value of the connection string. I tried this in my Global.asax, but it told me the value was read-only:

    ConfigurationManager.ConnectionStrings["myConnectionString"].ConnectionString =
        "Data Source=LIVE_DB;Initial Catalog=live;Integrated Security=True";

Does anyone have a good way of doing this, so I don't need to juggle Web.config files when I publish my site? Thanks.
I have encountered a problem with a specific set of tables. The same SELECT yields slightly different execution plans in two different environments (instances), but the slight variation seems to hide a huge difference in the statistics. I don't know the significance of these stats. The two tables have exactly the same indexes.
This is the select statement:
    SELECT 'xx'
    FROM DUKS.dbo.Profiler
    WHERE DNA_Løbenummer IN (SELECT DNA_Løbenummer
                             FROM DUKS.dbo.Effektregister
                             WHERE Sagsnummer = '2015-00002')
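To see what the optimizer sees in each environment, comparing the statistics directly may help (a sketch; the statistics/index name IX_Sagsnummer is hypothetical), and refreshing them rules out staleness:

    USE DUKS
    -- Compare histogram and density information between the two instances.
    DBCC SHOW_STATISTICS ('dbo.Effektregister', 'IX_Sagsnummer')
    -- Rule out stale statistics as the cause of the plan difference.
    UPDATE STATISTICS dbo.Effektregister WITH FULLSCAN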
How do you manage the progression of SSRS reports from DEV > TEST > USER > PROD? I can see that you COULD manage this all from within VS; however, then all the control is in the developers' hands, and even though you can control access with permissions and such, it's not very user-friendly to provide a tester, business area manager, or project manager with VS just so they can publish reports as they are approved. Are there any tools to manage migration between SSRS servers, or something that would let a user test a report and then "approve/promote" it to the next environment, or is this something we would need to create ourselves?
I can see that there are a few things that allow PROJECTS to be migrated (PowerShell scripts and something about Octopus Deploy), but these all seem too "big" for our organisation. We generally develop reports as a "logical group" (project) but migrate them individually or in VERY small groups (five is probably the maximum); however, keeping tabs on which report is at which version in which environment is causing our DBA massive migraines... Maybe there is something we can do from TFS, or perhaps we need to build a utility app?
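For the version-tracking headache specifically, a cheap hedged option is querying each environment's ReportServer catalog directly (schema details can vary between SSRS versions):

    -- List deployed reports and when they last changed, per environment.
    SELECT Name, [Path], ModifiedDate
    FROM ReportServer.dbo.[Catalog]
    WHERE [Type] = 2          -- 2 = report
    ORDER BY [Path], Name

Comparing this output across the DEV/TEST/PROD servers gives a rough "what's where" report without any extra tooling.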
In SQL Server 2005, the SQL Server full text engine (i.e., Microsoft Full Text Filter Daemon Launcher windows service) was installed as a Generic Service cluster resource in a Windows clustered environment.
However, in SQL Server 2012, the full-text engine does not appear as a "resource" within Failover Cluster Manager after installation. Do we need to manually configure the full-text engine as a Failover Cluster Manager "resource" for SQL 2012, or is this not necessary? If it needs to be configured, how do you go about doing it (i.e., what resource dependencies would you set up, etc.)?
I'm having a struggle with the printing of landscape reports in SSRS.
The reports appear fine when printed from the Visual Studio environment, but when I deploy to a web environment I get 1 or 2 blank pages before the report continues (I'm using the SSRS printer icon, not the browser printer icon).
I've spent ages looking at margin and printer settings and can't find anything wrong - especially as it works under Visual Studio.
Has anyone experienced problems of this nature and how did you resolve them?
We are setting up a new Reporting Services 2005 enterprise reporting tier that will support multiple developers, applications, and end users. We will have mirrored environments including development, test, and production each with their own database cluster, and reporting server.
We have multiple report developers who share a single Visual Studio solution, which is saved in SourceSafe and set up to have separate report projects for each business unit in the organization. Each report project is mapped to a specific deployment folder matching the business unit. Using the Visual Studio Configuration Manager, we can simply flip to the environment we want to deploy to, and the reports are published to the correct environment and folder structure.
My problem lies with the common data sources. We are using a single master Common Data Sources folder to hold all of the data sources. The trick is that each and every reporting folder seems to need its own copy of the data source in Visual Studio. There does not seem to be an easy way to change the data sources for the reports when you publish to the various environments, i.e. development, test, production, etc.
Ideally, we would have a single project for the common data sources that all reporting projects and associated folders would map to, and we would have a way to associate the appropriate data source for each environment when we deploy.
I'm looking for best practices on how to set up data sources for development and deployment in an enterprise environment that uses Visual Studio to develop and publish reports. We have 3 environments, 6 data sources per environment, and about 20 reporting folders/projects in Visual Studio. That's 360 changes that have to be managed when deploying reports. Is there a best-practice way to do this?
There has got to be a better way. Can anyone give me some insight into how to set this up?
We are setting up a test lab environment with 100 machines. We want one master testing db that gets replicated to each to run scripted application tests nightly.
My goal is to minimize the amount of work to move this thing to each of the 100 test machines. I am wondering whether we even need SQL local to each machine, and should instead invest in a monster DB server holding 100 restored copies of the DB, with each test machine pointing at its own copy on that server, or whether I should use DB mirroring or something to get the master test DB onto each of those machines instead.
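If the single-server route is considered, restoring 100 copies is easy to script (a sketch; the backup path and the logical file names are assumptions and must match the actual backup):

    -- Restore N copies of the master test database onto one instance.
    DECLARE @i int, @n nvarchar(3), @sql nvarchar(4000)
    SET @i = 1
    WHILE @i <= 100
    BEGIN
        SET @n = CAST(@i AS nvarchar(3))
        SET @sql = N'RESTORE DATABASE TestDB_' + @n
                 + N' FROM DISK = ''D:\backups\MasterTest.bak'''
                 + N' WITH MOVE ''TestDB_Data'' TO ''D:\data\TestDB_' + @n + N'.mdf'','
                 + N' MOVE ''TestDB_Log'' TO ''D:\data\TestDB_' + @n + N'.ldf'''
        EXEC (@sql)
        SET @i = @i + 1
    END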
I am looking for a SQL backup/restore tool which can restore multiple environments. Here are my high-level requirements:
1. We have 4 DBs, ranging from 1 TB to 1.5 TB each. When we restore to QA, DEV, or Staging, we usually restore all 4 of them.
2. I am looking for the restore of all 4 DBs to complete within 1 to 2 hours.
I am evaluating the Delphix software, but the setup is very complex and it has given us a lot of issues with Windows Authentication and failures in the middle of the backup. I used Quest software many years ago but can't find it on their website any more. Speed is very important for us, meaning completing the restore as fast as possible. We are on SQL 2012 and SQL 2008 R2. We are currently using NetApp technology, and I have the Redgate backup tool, but I am mainly looking for a fast restore process.
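Before buying anything, it may be worth benchmarking native striped restores, since reading several backup files in parallel from separate spindles is often the cheapest speed-up (a sketch; file names and paths are hypothetical):

    -- Restore from a backup striped across multiple files.
    RESTORE DATABASE BigDB
    FROM DISK = 'E:\bak\BigDB_1.bak',
         DISK = 'F:\bak\BigDB_2.bak',
         DISK = 'G:\bak\BigDB_3.bak'
    WITH REPLACE, STATS = 5

Native backup compression on SQL 2008 R2/2012 also cuts restore I/O substantially.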
My server is a dual AMD x64 2.19 GHz with 8 GB RAM running under Windows Server 2003 Enterprise Edition with service pack 1 installed. We have SQL 2000 32-bit Enterprise installed in the default instance. AWE is enabled using Dynamically configured SQL Server memory with 6215 MB minimum memory and 6656 maximum memory settings.
I have now installed, side by side, SQL Server 2005 Enterprise Edition in a separate named instance. Everything is running fine, but I believe SQL Server 2005 could run faster and need to ensure I am giving it plenty of resources. I realize AWE is not needed with SQL Server 2005, and I have seen suggestions to grant the SQL Server account the 'lock pages in memory' right. This box only runs the SQL 2000 and SQL 2005 server databases, and I would like to ensure, if possible, that each is splitting the available memory equally, at least until we can retire SQL Server 2000 next year. Any suggestions?
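A minimal sketch of capping each instance so the two don't fight over RAM (the split is illustrative for an 8 GB box; leave headroom for the OS):

    -- Run against each instance with its own cap (e.g. ~3 GB apiece).
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)', 3072
    RECONFIGURE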
We have an old machine which holds a SQL Server 2000 database. We need to migrate the whole database to a new machine which has SQL Server 2005.
When we tried to move the whole database using the Import and Export Wizard, only tables could be selected to import/export. However, we want to import/export the whole database, including tables, stored procedures, views, etc. Which tool should we use?
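A minimal sketch of the backup/restore route, which carries every object in the database (names and paths are hypothetical; restoring a 2000 backup onto 2005 upgrades the database in place):

    -- On the SQL 2000 machine:
    BACKUP DATABASE OldDB TO DISK = 'D:\OldDB.bak'

    -- Copy the file across, then on the SQL 2005 instance:
    RESTORE DATABASE OldDB FROM DISK = 'D:\OldDB.bak'
    WITH MOVE 'OldDB_Data' TO 'D:\Data\OldDB.mdf',
         MOVE 'OldDB_Log'  TO 'D:\Data\OldDB.ldf'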
When I proposed starting to use SQL Server 2005 for new VS 2005 web sites, one of my co-workers responded that we will upgrade the old SQL Server 2000 databases to SQL Server 2005 when we are ready to use SQL Server 2005.
Questions:
1. Any expected problems upgrading old 2000 databases to the new 2005 SQL Server?
2. I have installed both 2005 (Management Studio Express) and 2000 (Enterprise Manager) on my PC. Any expected problems running both SQL Server 2000 and 2005 on the same database server?
3. What is the best configuration for running SQL Server 2005 when we have old 2000 databases? Upgrade or not upgrade?
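One detail worth knowing for question 1: a 2000 database restored or attached on 2005 keeps compatibility level 80 until it is raised. A sketch for checking and changing it (the database name is hypothetical):

    -- Show the current compatibility level, then raise it to 90 (= SQL 2005)
    -- once the application has been tested against it.
    EXEC sp_helpdb 'OldDB'
    EXEC sp_dbcmptlevel 'OldDB', 90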
I am getting an error in Query Analyzer and need help. I am unable to connect to the server 'local': Msg 17, Level 16, State 1, ODBC SQL Server driver [DBNETLIB]: SQL Server does not exist.