SQL 2012 :: SSDT Installation On Multi-server Platforms
Jun 25, 2014
We have a BI-stack configuration for SQL Server 2012 that includes a gateway server that houses the app, a second server that houses the SQL Server database and SSIS, and a third server that houses SSAS and SSRS. Where should the SSDT tools be installed? I assume on the gateway? This is a production environment, so should VS and/or SSDT not be here at all?
On the 6th of March, SQL Server Data Tools for Visual Studio 2012 was released.
[URL]
I seem to be unable to install this using the link provided on the blog page. I'm getting a "Same architecture installation" error. Running on the machine are Visual Studio 2012 Premium and SQL Server 2012 (64-bit).
I am working at a secured company that doesn't allow a live internet connection, so I can't install SSDT using the Web Platform Installer, and we also don't want to install the entire 2012 client tools.
In a VS2015 database project, I want to create some views for my database that reference tables in another database using three-part naming.
This works fine in SSMS, but when I try to build my project it throws a reference error.
I can't import the other database into this project, so is there a way to suppress the error? I don't really want to exclude these views from the project.
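For reference, a minimal sketch of the usual SSDT pattern, with hypothetical names (a database reference whose SQLCMD variable is OtherDb and a table dbo.SomeTable). A reference can be added from a .dacpac without importing the database into the project, and the view then swaps the hard-coded database name for the variable:

CREATE VIEW dbo.vSomeView
AS
-- [$(OtherDb)] resolves via the database reference at build time;
-- the raw three-part name is what raises the unresolved-reference error.
SELECT s.Id, s.SomeValue
FROM [$(OtherDb)].dbo.SomeTable AS s;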
In Visual Studio 2012 C# projects, when you double-click an identifier (variable name, class name, etc.), all occurrences of that string in the current file are highlighted.
It doesn't work in SQL Server projects in .sql files. It's a shame that although we are in Visual Studio, we are still limited in functionality.
I have 2 dbs (SQL 2012) - one contains a trigger that is enabled/disabled by a procedure in the other database. This all works fine.
If I create a Database Project solution in Visual Studio 2012 SSDT (or 2013) for both databases, the stored procedure generates a SQL71502 warning stating that my trigger name can't be resolved.
To recreate the issue:
CREATE DATABASE DbWithTrigger
GO
USE DbWithTrigger
GO
CREATE TABLE dbo.TblWithTrigger(
    Id int NULL,
    SomeValue varchar(30) NULL
)
GO
1. Create a new solution with a project named DbWithTrigger
2. In project settings set the Target platform to SQL 2012
3. Import the DbWithTrigger db into this project
4. Create a new project named DbCallsTrigger
5. In project settings set the Target platform to SQL 2012
6. Import the DbCallsTrigger db into this project
7. Add a Database Reference in DbCallsTrigger for DbWithTrigger
When you build the solution both dbs build successfully, but there are two warnings. One is easily resolved by replacing DbWithTrigger in the body of the procedure with [$(DbWithTrigger)] (the db variable name for the reference), but I can't figure out how to get rid of the other. Is it a bug?
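For illustration, the procedure has roughly this shape (trigger and procedure names hypothetical). The table reference resolves once it uses the SQLCMD variable, but the trigger name appears to be what still draws SQL71502, since SSDT does not seem able to resolve trigger names across database references:

CREATE PROCEDURE dbo.ToggleTrigger
    @Enable bit
AS
BEGIN
    IF @Enable = 1
        -- The table resolves via [$(DbWithTrigger)], but the trigger name
        -- in the referenced database still cannot be resolved at build time.
        ALTER TABLE [$(DbWithTrigger)].dbo.TblWithTrigger ENABLE TRIGGER trgSomeValue;
    ELSE
        ALTER TABLE [$(DbWithTrigger)].dbo.TblWithTrigger DISABLE TRIGGER trgSomeValue;
END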
Looking for disaster recovery options based on the following criteria.
--Currently running SQL 2012 Standard edition
--We have 18,000 databases (same schema across databases); the majority of databases are less than 2 GB, spread across approximately 64 instances
--Recovery needs to happen within 1 hour (not sure that this is realistic)
--We are building a new data center and building DR from the ground up
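For a sense of scale on that target: 18,000 databases in 60 minutes works out to about 300 recoveries per minute cluster-wide, or roughly 280 databases per instance within the hour across the 64 instances, so whatever mechanism is chosen has to be almost entirely hands-off.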
What I have looked into is:
1. Transactional replication: too much data, not viable.
2. AlwaysOn Availability Groups: needs Enterprise; again, too many databases, and we would have to upgrade all instances.
3. Log shipping: a viable option and the only one I can come up with that would work right now. It might be a management nightmare, but with this many databases probably all options will be a nightmare.
More often than not, I don't touch DTC on clusters anymore; however, I'm now on a project where the vendor states that it's required. So, a couple of things here.
1) Do you really need DTC per instance, or one for all?
2) Should DTC be in its own resource group or within the instance's group?
2a) If in its own resource group, how do you tie an instance to an outside resource group? tmMappingSet, right?
I'm designing a stored procedure that inserts data into 2 tables. I need to pass a list of values in one parameter to complete my query. I tried a table-valued parameter, but it's not good for me because table-valued parameters are read-only and I need to insert the data from the list...
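For what it's worth, a minimal sketch of the table-valued-parameter pattern (hypothetical names): the TVP is indeed read-only inside the procedure, but it can still be read as the source of INSERT ... SELECT into both tables:

CREATE TYPE dbo.IdValueList AS TABLE
(
    Id        int         NOT NULL,
    SomeValue varchar(30) NULL
);
GO
CREATE PROCEDURE dbo.InsertFromList
    @Items dbo.IdValueList READONLY   -- READONLY is required for TVPs
AS
BEGIN
    -- The parameter cannot be modified, but it can be read from freely.
    INSERT INTO dbo.TableA (Id, SomeValue)
    SELECT Id, SomeValue FROM @Items;

    INSERT INTO dbo.TableB (Id)
    SELECT Id FROM @Items;
END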
We have an application that runs Jobs, each of which affects ## child objects (usually around 1M). When a thread gets to 5000 updated child objects, it bulk inserts into a table called ActionLog with the child Id and JobId.
When the job is complete a sproc SUMs the children from the ActionLog table: select sum(id) from ACTIONLOG where JOBID = @JobId;
It then updates the Jobs table's AffectedObjectCount column with the result of the query above.
Instead of writing to the ActionLog table and calculating the SUM at the end, I would like to do this in real time. After each bulk insert I would like to update the AffectedObjectCount column with the number of rows that were just bulk inserted. I tried this in the past and ran into major contention issues. There are usually 20 threads running a job, so there is a lot of potential for deadlocks.
Is there a recommended way to handle updating one column on one row from multiple threads? What is the best practice for a counter like this?
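One hedged sketch of the usual counter pattern (hypothetical names): a single atomic UPDATE per 5000-row batch, so there is no separate read-then-write to race against. The hot row still serializes its lock across the 20 threads, but one short statement per batch keeps the hold time minimal:

-- Run once per bulk-inserted batch, ideally in the same transaction.
UPDATE dbo.Jobs
SET AffectedObjectCount = AffectedObjectCount + @BatchRowCount
WHERE JobId = @JobId;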
When adding a node to a SQL Server 2012 Standard edition cluster, how do I identify the location for the SQL Server shared components and the rest of the SQL Server installation binaries?
When adding a node to a SQL Server 2012 Standard edition cluster all the binaries went to the C: drive default location. We put those files on a different drive when installing the first node. What needs to be done so both nodes have the binaries on the same drives and folders?
I've seen a lot of stuff regarding 64-bit and the Jet drivers. Apparently the only way to overcome this situation is to use the 32-bit command-line version of DTEXEC found in C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn. Is that trick what SSIS is using under the covers when it successfully executes my export package? Or is there a means of building the managed application so it uses the traditional 32-bit libraries of Jet/Access?
I (obviously) prefer the managed-code approach when loading and executing SSIS from my end-user application. Any suggestion would be appreciated.
We are running SQL Server 2012 Enterprise on Windows 2012 R2. We have set up a WSFC with the primary node in our East Coast data center and the secondary node in our West Coast data center; each node is in a different subnet. Since we have high latency between the sites, we run in asynchronous mode with manual failover only. My question concerns quorum: everything I read indicates we need a third vote (an odd number), e.g. a file share witness, with only the file share and the primary node having a vote... What I don't see is any need for a file share if we can only do manual failover.
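For reference, the static vote math is the usual argument for the witness even with manual failover: two nodes alone hold 2 votes and a majority needs 2, so losing either node (or the WAN link) takes the whole cluster, and with it the surviving replica's availability, offline; with a file share witness there are 3 votes and a majority of 2, so the primary node plus the witness can keep the cluster up when the remote node is unreachable. (Dynamic quorum in Windows 2012 R2 softens this somewhat, but the witness is still the recommended configuration.)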
When I fail an availability group over between subnets, I am finding that the old DNS entry stays behind. So what happens is the availability group listener has 2 records in DNS, one for each IP. This causes the app to time out at times, since DNS will return either of the two IPs.
We are implementing a multi-site Windows Server Failover Cluster (WSFC) to enable Always On between our primary and DR sites. We are not going to use SQL clustered instances. We are not planning to use shared disks. Each node is running a standalone instance of SQL 2012.
I have successfully configured a 3 node multi-site Windows failover cluster with no shared storage. For quorum, I have defined a File Share Witness (FSW). The FSW has voting rights and is in the DR site. The setup looks like this –
WSFC –
•Node A – Site #1 (voting right = 1)
•Node B – Site #1 (voting right = 1)
•Node C – Site #2 (voting right = 0)
•FSW – Site #2 (voting right = 1)
Again - There are no shared disks in our setup. We are not going to use SQL clustered instance. We are going to use Always On with these 3 nodes.
SQL Always On –
•Node A – Site #1 (Primary Replica)
•Node B – Site #1 (Readable Secondary)
•Node C – Site #2 (Readable Secondary)
Everything, including the “availability group”, works properly under this setup. However, a failover to site #2 in a DR situation is not working; I know why, but I don't know what needs to be done to fix the problem.
The following works fine –
•Automatic failover between nodes A and B (same site – site #1)
•Forced failover to node C in site #2, provided at least one of the nodes in site #1 is up (non-DR situation) – this will ensure the cluster is up
The following is not working –
•Forced failover to node C in site #2 when both nodes in site #1 are lost (true DR situation) – this fails because the cluster is not up at this point.
I know I have to bring the cluster up somehow and I have not been able to do so by restarting the cluster service.
I tried to run the command to start the cluster service.
Question –
How can I FORCE the cluster to come up in Site #2 on node C when it has no voting rights?
I have always worked with an even number of nodes and shared disks in traditional clustering. I am not sure what needs to be done in this scenario with 3 nodes and an FSW.
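In case it helps frame the question: the SQL side of the true DR failover is the statement below (AG name hypothetical); the prerequisite, and the part that is failing here, is first getting the cluster service on node C started with forced quorum, because the statement can only run once the cluster is up:

-- Run on node C after the cluster is up (forced quorum on the cluster side,
-- e.g. via the cluster tools).
-- FORCE_FAILOVER_ALLOW_DATA_LOSS acknowledges possible data loss, which is
-- required after a forced-quorum cluster start.
ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;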
I have a multi-value parameter that I am having a hard time writing a COUNT expression for in SSRS. Here is the situation:
1. If the "(Select All)" in the drop down is selected, COUNT all last names for ALL of the Auditor parameter 2. If a specific or multiple auditors are selected from the drop down, COUNT all last names based on that selection for the Auditor parameter
Currently, I have it COUNT by ALL and that works, but if a specific auditor or multiple auditors are chosen, the COUNT doesn't work.
I am new to Reporting Services and hope that what I am looking to do is within its capabilities :-)
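A minimal sketch of the dataset-side approach, with hypothetical table and column names: if the dataset query itself filters on the parameter, SSRS expands the multi-value @Auditor into an IN list for both "(Select All)" and specific selections, and a plain Count of last names on the report then works in either case:

SELECT a.AuditorName, a.LastName
FROM dbo.Audits AS a
WHERE a.AuditorName IN (@Auditor);   -- SSRS expands the multi-value parameter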
I have many identical schema databases residing on a number of data servers. These support individual clients accessing them via a web interface. What I need to be able to do is run reports across all of the databases. So the layout is:
Dataserver A
Database A1
Database A2
Database A3
Dataserver B
Database B1
Database B2
Dataserver C
Database C1
Database C2
Database C3
I would like to run a report that pulls table data from A1, A2, A3, B1, B2, C1, C2, C3
Now, the actual number of servers is 7 and the number of databases is close to 1,000. All servers are running SQL 2005.
Is this something that Reporting Services is able to handle or do I need to look at some other solution?
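As a sketch of one direction, independent of Reporting Services (hypothetical table name, written to stay SQL 2005 compatible): on each server, dynamic SQL can union the same query across every application database, and the report can then combine the per-server result sets:

DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Build one UNION ALL query covering every user database on this server.
SELECT @sql = @sql + N' UNION ALL SELECT N''' + name + N''' AS DbName, t.SomeColumn'
            + N' FROM ' + QUOTENAME(name) + N'.dbo.SomeTable AS t'
FROM sys.databases
WHERE database_id > 4;                -- crude filter: skip the system databases

SET @sql = STUFF(@sql, 1, 11, N'');   -- strip the leading ' UNION ALL'
EXEC sys.sp_executesql @sql;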
I have a DB that is currently not normalized and will be getting about 100K concurrent users that will mostly be doing Read-Only operations from multiple tables.
I am trying to figure out if I should start thinking of having a DB per client (1000 clients) or if I should normalize the database and keep it as a single DB with good indexes and partitioning.
Hardware is not a problem but 100K concurrent users is.
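If the single-database route wins, a hedged sketch of what the partitioning piece could look like concretely (hypothetical names and boundary values; note that table partitioning requires Enterprise edition on this era of SQL Server):

-- Partition rows into client-id bands; the boundary values are placeholders.
CREATE PARTITION FUNCTION pfClient (int)
    AS RANGE RIGHT FOR VALUES (250, 500, 750);
GO
CREATE PARTITION SCHEME psClient
    AS PARTITION pfClient ALL TO ([PRIMARY]);
GO
CREATE TABLE dbo.ClientData
(
    ClientId int          NOT NULL,
    Payload  varchar(100) NULL
) ON psClient (ClientId);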
I'm getting a rule check fail "Long path names to files on SQL Server Installation media failed" when installing SQL 2012 Standard edition from a network share.
I'm hitting an issue when building an FCI: after the step of specifying which drives the data, logs, and tempdb go on, the installer errors with this message:
Exit message: The volume that contains SQL Server data directory D:\SQL_Data\MSSQL11.MSSQLSERVER\MSSQL\DATA does not belong to the cluster group.
We are running Windows Server 2012 R2 and SQL Server 2012 Enterprise with Veritas Storage Foundation for Windows 6.1. I've confirmed that the disk group exists and is mounted in Windows and presented as available storage. Symantec has an article detailing this same circumstance but the Microsoft KB article that they refer to only has hotfixes for Windows Server 2012, not R2.
A few questions on a SQL Server 2012 two-node active/passive cluster installation on Windows 2012.
1. What permissions are required for the user account used to install a SQL Server 2012 cluster? Does it need any rights on the DC or anywhere else apart from the local nodes?
2. Can we give ANY meaningful name to the "SQL Server Network Name" during installation? Do we need to configure it manually anywhere else before or after the installation?
3. In what scenarios do we need to check/uncheck the DHCP check box?
I have an SSDT project that intermittently fails with the following error when I try to publish via Visual Studio by double-clicking the publish.xml file...
(330,1): SQL72014: .Net SqlClient Data Provider: Msg 1205, Level 13, State 68, Line 5
Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
(326,0): SQL72045: Script execution error. The executed script:
IF EXISTS (SELECT 1
           FROM [master].[dbo].[sysdatabases]
           WHERE [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET READ_COMMITTED_SNAPSHOT OFF
        WITH ROLLBACK IMMEDIATE;
END
This is a Hyper-V guest running on my laptop. I connect to the client's VPN inside the guest and use the "runas" method to open Visual Studio under the domain credentials I use on the client's network. This allows me to connect to servers on the client's domain via apps as if I were logged in with the appropriate credentials (not sure how to phrase that better). I can elaborate if deemed necessary, but even the client's FTEs are having the same problem, and they're sitting in the building.
Windows Server 2012 R2 Standard
SQL Server 2014 CU6 (12.0.2480)
Visual Studio Premium 2013 (12.0.31101.00) Update 4

Windows Server 2012 R2 (VMware)
SQL Server 2012 SP2 (11.0.5058)

Here are some observations...
•Happens about 60% of the time during the day (I just keep re-running until it deploys successfully); late at night and on weekends it doesn't seem to happen with anywhere near as high a frequency.
•Only seems to happen when I use the "Always re-create database" option (I don't recall incrementals having this problem, but admittedly I'm not doing many of those as we're still in the full-speed-ahead development phase).
•This does NOT happen when I publish to my local SQL Server instance.
•I ran a deadlock xEvent trace on the target instance; unfortunately it didn't seem to capture all the info, and the deadlock graph doesn't render (error below).
•I tried checking the Single User Mode option during the publish; this seems to increase the frequency with which the issue occurs.
There is a multi-value parameter called "include" in the report, where "Allow Multiple Values" is checked. It has 4 available values, as shown in the attached screenshots, and a preview of the report is also shown. There is no dataset for this parameter; the values are displayed on the report based on the visibility condition set in the report. Example: if the first value is selected, then 1 is passed, and based on the visibility condition set in the report, the report output is displayed. "None" is the default and has the value 4; when the report is run with this option, i.e. "None", the other three parameter values are not applicable.
Requirement:
- When the end user selects the (Select All) check box, the (None) check box must be disabled or must not appear for selection.
- When the end user selects any of the first three check boxes (except None), the None check box must also be disabled or must not appear for selection.
- When the end user selects a combination of the first three, the None check box must likewise be disabled or must not appear for selection.
- None is set as the default with a value of 4 and is applicable only when the user selects none of the first three values; the report will still run.
We're having a bizarre issue with installing our SQL 2012 cluster. On the stand-alone instances, we're able to choose Replication and Data Quality Services from the Database Engine Services without choosing Full-Text and Semantic Extractions for Search. But when installing the cluster, it won't let us choose the other two without it. In fact, if we click on either Replication or Data Quality, it auto-checks ALL of the features.
My coworker discovered this problem and I was able to replicate it. We're using the SP1 install msi.
Why is this happening with a cluster install but not a stand-alone install?
My install appears to be hanging at the as_cluster_ip_address_cluster_config_Cpu64 part and I'm wondering if it's because it's trying to install Full Text.