I'm looking at installing 2008 R2 and 2014 side by side, then using database mirroring to provide HA for the 2008 R2 instance and AlwaysOn Availability Groups (AoHA) for the 2014 instance. I'd be using the same two physical servers for both the mirroring pair and the AoHA pair.
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations are CI (case-insensitive) and AS (accent-sensitive).
From experience I know databases with different collations can reside next to each other on the same instance.
The only problem could be ('could be!!') the use of TempDB, with a high volume of transactions executed in TempDB combined with choosing snapshot isolation ....
The application the databases belong to is very static, hardly updated, and queried only a few times per hour (so no TempDB issue, I guess).
Is there any problem with databases using different collations running on the same instance?
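To illustrate the one failure mode I'm aware of, here is a minimal sketch (table and column names are made up): a temp table inherits the instance/tempdb collation, so comparing it against a column from a migrated database errors out unless one side is coerced:

    -- #t picks up the instance/tempdb collation (Latin1_General_CI_AS on 'NEW')
    CREATE TABLE #t (Code varchar(10));

    -- Comparing against a SQL_Latin1_General_CP1_CI_AS column raises
    -- "Cannot resolve the collation conflict" unless one side is coerced:
    SELECT o.*
    FROM dbo.Orders AS o          -- hypothetical table in a migrated database
    JOIN #t AS t
        ON o.Code = t.Code COLLATE DATABASE_DEFAULT;  -- coerce to the current database's collation

Declaring the temp table column with COLLATE DATABASE_DEFAULT up front avoids the cast at join time.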
How would you calculate the average read/write latency experienced by a SQL Server instance during a specific time window, in order to monitor this for multiple instances? From this MSDN blog, I know that you have to take multiple samples and do some calculations to get the correct latency.
[URL] ...
However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool and we want to get one number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem.
object_name                    counter_name                 instance_name  cntr_value  cntr_type
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        default        307318919   1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   default        25546724    1073939712
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        internal       2045730     1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   internal       208270      1073939712
I'm thinking I would need to do some sort of weighted average, but I'm not sure that will result in the correct value. Here's the formula I am currently thinking about using, before doing the calculation over time.
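One way to express that weighted average against sys.dm_os_performance_counters, summing the raw values and bases separately across pools before dividing (cntr_type 1073874176 is the raw value, 1073939712 its base), would be something like this untested sketch:

    -- Single sample; for a time window, take two samples and divide
    -- the delta of the summed values by the delta of the summed bases.
    SELECT SUM(v.cntr_value) * 1.0 / NULLIF(SUM(b.cntr_value), 0) AS avg_read_latency_ms
    FROM sys.dm_os_performance_counters AS v
    JOIN sys.dm_os_performance_counters AS b
        ON  b.[object_name]  = v.[object_name]
        AND b.instance_name  = v.instance_name
        AND RTRIM(b.counter_name) = 'Avg Disk Read IO (ms) Base'
    WHERE RTRIM(v.counter_name) = 'Avg Disk Read IO (ms)'
      AND v.[object_name] LIKE '%Resource Pool Stats%';

The RTRIMs are there because counter_name is padded with trailing spaces in the DMV.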
Currently I have a set of queries which I run for data extraction, and I paste the results into an Excel table so that my pivot table and chart get populated, along with the summary, in the Excel file.
I would like to automate this and want to know if I can produce the Excel dashboard output at the click of a button.
Suppose someone has to work on a lot of different SQL Server databases which contain a lot of tables and queries/views.
After a period of time, it becomes very difficult to remember exactly what kind of columns are present within a given table or view.
Is there any method by which one can keep a systematic list of all the tables and views that are present within a SQL Server database, along with the columns that are present within them?
Are there any add-on products or services available to make this type of work systematic?
Currently I add comments to each query inside SQL Server to remind me of what the query is doing, but this method is not great.
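One low-tech option I've considered is saving a catalog query and rerunning it per database, since the standard INFORMATION_SCHEMA views already hold the full table/view/column inventory; something like:

    -- Inventory of every table and view with its columns, per database
    SELECT c.TABLE_SCHEMA,
           c.TABLE_NAME,
           t.TABLE_TYPE,                  -- 'BASE TABLE' or 'VIEW'
           c.COLUMN_NAME,
           c.DATA_TYPE,
           c.CHARACTER_MAXIMUM_LENGTH
    FROM INFORMATION_SCHEMA.COLUMNS AS c
    JOIN INFORMATION_SCHEMA.TABLES  AS t
        ON  t.TABLE_SCHEMA = c.TABLE_SCHEMA
        AND t.TABLE_NAME   = c.TABLE_NAME
    ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;

The output can be pasted into a spreadsheet or written to a documentation table on a schedule.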
I've been having some trouble getting a single-column varchar(5) field to reliably use an index seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Typically, this table is accessed with a query that includes:
SELECT ... FROM SummaryTable WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses an index seek. But when I give it 65 or more ZIP codes it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes the query also uses a table scan. I'm wondering if the collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to an integer to see what happens.
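Before changing data types, one thing I'm considering is materializing the ZIP list into an indexed temp table so the optimizer gets real cardinality estimates and a matching collation; an untested sketch using the names from above:

    -- Match the indexed column's collation to avoid an implicit conversion
    CREATE TABLE #zips
    (
        ZipCode varchar(5) COLLATE Latin1_General_100_BIN2 PRIMARY KEY
    );

    INSERT #zips (ZipCode)
    SELECT DISTINCT ZipCode
    FROM @ZipCodesForMO;          -- the table variable from the production query

    -- Equality join on matching types and collations should favor a seek
    SELECT s.*
    FROM SummaryTable AS s
    JOIN #zips AS z
        ON s.ixZIP = z.ZipCode;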
I have a remote server with a SQL Server 2014 instance on it. There is nothing else running on the SQL Server box (dedicated SQL box). There is only one instance of SQL 2014 on the server, and no other versions of SQL Server.
Issue:
1. When I execute a query connecting to the SQL server instance through my local SSMS, the query executes in 30 secs.
2. When I connect to the remote server through a Windows RDP session and execute the same query in SSMS (on the server), the query executes in 1 minute.
We carried out an in-place upgrade on our production server on Saturday - going from 2008 R2 to 2014.
We had tested this method out in dev/test and pre-production with only minor post issues to fix.
However, on production we had an issue whereby checkdb was hitting 100% CPU and caused overnight processes to hang. The checkdb statement was terminated and disabled by a colleague at 1 am.
Since then we have restored this database to a dev server and run checkdb against it with no_infomsgs and all_errormsgs, but it has been running since Monday morning and still hasn't finished!
The database is just over 800 GB, and whilst checkdb was crippling the CPU, logical reads were less than one. However, sp_whoisactive shows that it has done 56 million reads so far, and this number increases periodically, so it looks like it keeps going back to re-check the database with a deep dive.
Also, on a different environment, we ran check table statements and one of them took over 9 hours for a single table but came back clean (see attachment).
We need to wait for the output but the database is still in use in production and the mess will just get worse if it is indeed corrupted.
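If it helps, I'm considering a physical-only pass on the restored copy first, since it reads every page without the full logical checks and tends to finish much sooner (database name below is a placeholder):

    -- Faster first pass: page/allocation integrity only, skipping logical checks
    DBCC CHECKDB (N'ProdDB_Restored') WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS;

A clean physical-only result would at least tell us whether the production pages are intact while the full check keeps running elsewhere.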
I've been using partitioned views in the past, and used check constraints in the source tables to make sure only the table matching the condition in the view's WHERE clause was used. In SQL Server 2012 this was working just fine (I had to do some tricks to suppress parameter sniffing, but it worked correctly after doing that). Now I've installed SQL Server 2014 Developer and used exactly the same logic, yet in the actual query plan it is still using the other tables. I've tried the following things to avoid this:
- OPTION (RECOMPILE)
- Using dynamic SQL to pass the parameter value as a static string to avoid sniffing.
To explain, what I'm doing is this:
1. I have 3 servers with the same source tables; the only difference in the tables is one column with the server name.
2. I've created a CHECK CONSTRAINT on the server name column on each server.
3. On one of the three servers (in my case server 3) I've set up linked server connections to servers 1 and 2.
4. On server 3 I've created a partitioned view that is built up like this:

SELECT * FROM [server1].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server2].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server3].[database].[dbo].[table]

5. To query the partitioned view I use a query like this:
SELECT * FROM [database].[dbo].[partioned_view_name] WHERE [server_name] = 'Server2'
Now when I look at the execution plan in the 2014 environment, it is still using all the servers instead of just Server2 as it should. The strange thing is that SQL 2008 and 2012 work just fine, but 2014 does not seem to use the correct plan.
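For reference, this is the shape of the constraint on each server and how I re-validate it (names are placeholders); my understanding is that partition elimination only happens while the constraint is trusted, so this is the first thing I'm double-checking on the 2014 box:

    -- The constraint that should allow elimination (on Server2, for example)
    ALTER TABLE dbo.[table] WITH CHECK
        ADD CONSTRAINT CK_table_server_name CHECK (server_name = 'Server2');

    -- Re-validate an existing constraint so is_not_trusted returns to 0
    ALTER TABLE dbo.[table] WITH CHECK CHECK CONSTRAINT CK_table_server_name;

    -- Confirm trust status
    SELECT name, is_not_trusted
    FROM sys.check_constraints
    WHERE parent_object_id = OBJECT_ID('dbo.[table]');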
I am trying to write a query to calculate the running difference between data on different dates. Below is what my table of data looks like. Basically I want to calculate the difference between the total_completed for each state and date.
Below is my code (I almost have what I need); I just can't figure out how to show 0 as the completed_difference for the first date for each state, since there is no prior date to calculate against.
MRR_TOTALS_WEEK_OVER_WEEK AS
(
    SELECT T1.[Date]
          ,T1.States
          ,T2.Total_Completed
          ,ROW_NUMBER() OVER (PARTITION BY T1.States ORDER BY T1.States, T1.[Date]) AS ORDERING
    FROM TOTAL_CHARTS T1
    LEFT JOIN TOTAL_COMPLETED T2
        ON  T1.[Date]  = T2.[Date]
        AND T1.States  = T2.States
)
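If you're on SQL Server 2012 or later, LAG's optional third argument (the default when there is no prior row) can produce the 0 directly; an untested sketch against the CTE above:

    SELECT [Date],
           States,
           Total_Completed,
           -- Defaulting LAG to the current value makes the first row's difference 0
           Total_Completed
             - LAG(Total_Completed, 1, Total_Completed)
                 OVER (PARTITION BY States ORDER BY [Date]) AS Completed_Difference
    FROM MRR_TOTALS_WEEK_OVER_WEEK;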
I need to find stored procedures in my database in which an UPDATE statement's WHERE clause filters on a column other than the table's primary key.
For example, if the employee table has empId as its primary key and an UPDATE query uses empName in the WHERE clause to update an employee record, then that SP should be listed. There could be hundreds of tables, each with its primary key, and thousands of SPs in a database. How can I find the SPs where the WHERE clause uses some column other than the table's primary key?
Any other hint or query to identify such queries that lock tables would also help; so far I have only found the few queries above that do not use the primary key in the WHERE clause.
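Absent a real T-SQL parser, would a rough text search over module definitions be a workable first pass? Something like this untested sketch, which flags procedures that mention updating a table but never mention any of that table's PK columns (purely textual, so hits are only candidates):

    SELECT DISTINCT
           OBJECT_SCHEMA_NAME(m.object_id) AS sp_schema,
           OBJECT_NAME(m.object_id)        AS sp_name,
           t.name                          AS table_name
    FROM sys.sql_modules AS m
    JOIN sys.objects     AS o
        ON o.object_id = m.object_id AND o.type = 'P'       -- procedures only
    JOIN sys.tables      AS t
        ON m.definition LIKE '%UPDATE%' + t.name + '%'       -- mentions updating the table
    WHERE NOT EXISTS                                          -- ...but never mentions a PK column
    (
        SELECT 1
        FROM sys.indexes AS i
        JOIN sys.index_columns AS ic
            ON ic.object_id = i.object_id AND ic.index_id = i.index_id
        JOIN sys.columns AS c
            ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE i.object_id = t.object_id
          AND i.is_primary_key = 1
          AND m.definition LIKE '%' + c.name + '%'
    );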
I recently installed a standalone version of SQL Server 2014 Standard on my work computer. I used Access before, but I want to use SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).
Here's what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent Job for this package to run at a certain time, daily
At first the jobs would fail with an "Access is denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I made.
Jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can’t get it to work.
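For reference, this is roughly how I wired up the credential and proxy (account and names are placeholders). I'm also making sure the job step is set to "Run as" the proxy rather than the Agent service account, and that the package references the file by UNC path (\\server\share\file.csv) instead of a mapped drive, since Agent runs non-interactively:

    -- Credential: a domain account that itself has share/NTFS rights on the path
    CREATE CREDENTIAL ImportCred
        WITH IDENTITY = N'DOMAIN\YourUser', SECRET = N'password-here';

    -- Proxy bound to the credential, granted to the SSIS subsystem
    EXEC msdb.dbo.sp_add_proxy
         @proxy_name = N'ImportProxy', @credential_name = N'ImportCred';
    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
         @proxy_name = N'ImportProxy', @subsystem_name = N'SSIS';

    -- Allow a login to use the proxy in its job steps
    EXEC msdb.dbo.sp_grant_login_to_proxy
         @login_name = N'DOMAIN\YourUser', @proxy_name = N'ImportProxy';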
I have a SQL Server 2008 instance that is running on "LiveServer" our production database (ProdDB) - and we need to upgrade to 2014. In order to do some upgrade testing, I spun up a VM with the same version of SQL server on the test VM (TestServer), did a backup of the production DB from the live server, and restored it to TestServer under a different name (ProdDBUA).
I then installed SQL2014 Upgrade advisor onto TestServer, and ran it, checking all the boxes (reporting services etc..) and it all came back clean - no issues whatsoever - not a single warning even. I'm under the impression that stored procs/functions etc... all reside within the DB, so a backup will include those. Is that correct?
The problem is, I know I have stored Procs, functions and views that use deprecated joins in that LiveServer.ProdDB. What do I need to do/configure/check in order to make sure that the Upgrade Advisor is actually checking through all that T-SQL that has deprecated code? I want to have a list to give to my report writers of procs/functions/views that need to be rewritten prior to the upgrade going live.
If a modification needs to be run on TestServer.ProdDBUA (a cursor to change the path, etc.), I can do that. The DB is running in compatibility level 90.
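As a fallback for the deprecated join syntax specifically, I'm considering a blunt text search across all module definitions, since that should at least give my report writers a worklist even if Upgrade Advisor misses it (pattern matching will produce some false positives):

    -- Old-style outer join operators in any proc/function/view definition
    SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
           OBJECT_NAME(object_id)        AS object_name
    FROM sys.sql_modules
    WHERE definition LIKE '%*=%'    -- legacy left outer join
       OR definition LIKE '%=*%';   -- legacy right outer join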
I have a situation where I need to do some reporting on similar asset records, but the data lives in different databases on the same server. I am curious how to write the queries I will need.
Let's say I have 3 databases and I am looking for an asset of some type whose ID is a serial number. The serial number will be in the main table of each database. Is there a way I can set up a relation between these records in a view?
For example:
1. Database A - DSNA_tblMaiin - Serialnumber
2. Database B - DSNB_tblMaiin - Serialnumber
3. Database C - DSNC_tblMaiin - Serialnumber
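Since all three databases are on the same server, would a plain three-part-name UNION ALL view work? A sketch using the names from the example (treat them as placeholders):

    CREATE VIEW dbo.vAssetSerials
    AS
    SELECT 'A' AS SourceDb, Serialnumber FROM DatabaseA.dbo.DSNA_tblMaiin
    UNION ALL
    SELECT 'B', Serialnumber FROM DatabaseB.dbo.DSNB_tblMaiin
    UNION ALL
    SELECT 'C', Serialnumber FROM DatabaseC.dbo.DSNC_tblMaiin;

    -- Report across all three sources by serial number
    SELECT SourceDb, Serialnumber
    FROM dbo.vAssetSerials
    WHERE Serialnumber = 'SN-12345';   -- example value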
I have a simple query which creates tables, taking the database name as a parameter from a parameter table.
SP1 creates the databases and calls SP2, which creates the tables. I can run this fine via SSMS, but when I run it using SSIS it fails with the error below. The issue gets more interesting because it fails randomly: some database creations fail while others complete just fine.
Note: I am not passing any database named '20'.
Exception handler error:

ERROR :: 615 :: Could not find database ID 20, name '20'. The database may be offline. Wait a few minutes and try again.
SPID: 111
Origin: SQL Stored Procedure (SP1)
Could not find database ID 20, name '20'. The database may be offline. Wait a few minutes and try again.
Error in SSIS
[Execute SQL Task] Error: Executing the query "EXEC SP1" failed with the following error: "Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

I have sysadmin permission.
We have a query which pulls revenue by country and client for the last 3 years. Right now we have each year being reported in separate columns, but we would like the revenues for each year for each client to appear on one row. Below is the current query we have set up.
SELECT p.country_code,
       p.local_client_code,
       wwc.local_client_name,
       case when pr.fiscal_year = 2015
            then sum(pr.local_consulting_fees * er.rate)
               + sum(pr.local_product_fees * er.rate)
               + sum(pr.local_admin_fees * er.rate)
               + sum(pr.local_misc_fees * er.rate)
            else 0
       end as '2015 Revenue',
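If the goal is one row per client with a column per year, I believe the aggregate needs to wrap the CASE rather than the other way around, so each year folds into its own column under a single GROUP BY. A hedged rework of the visible fragment (the FROM clause and joins are omitted here exactly as in the original):

    SELECT p.country_code,
           p.local_client_code,
           wwc.local_client_name,
           SUM(CASE WHEN pr.fiscal_year = 2015
                    THEN (pr.local_consulting_fees + pr.local_product_fees
                        + pr.local_admin_fees + pr.local_misc_fees) * er.rate
                    ELSE 0 END) AS [2015 Revenue],
           SUM(CASE WHEN pr.fiscal_year = 2014
                    THEN (pr.local_consulting_fees + pr.local_product_fees
                        + pr.local_admin_fees + pr.local_misc_fees) * er.rate
                    ELSE 0 END) AS [2014 Revenue]
    -- FROM ... joins as in the original query
    GROUP BY p.country_code, p.local_client_code, wwc.local_client_name;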
I am using Server 2012 and am very new to SQL. I have a request from a physician for a list of his patients that meet certain criteria. This is stored in a temp table named #cohort.
Using this cohort, he wants each row to be one patient with a list of labs, vitals, etc. Three of the items are the most recent lab value and its date. I could query each lab individually, place it into a temp table, and then join all the temp tables at the end, but I am trying to move past that and get all labs into one temp table. All temp tables are joined on PatientSID.
I tried to do something for just 2 labs, but it is not working. There could be null values when joined with the #cohort table.
Individually the SELECT statements pull in the most recent lab value and date, but I cannot get them into a temp table with one row of PatientSID and then the lab value and date if they exist.
IF OBJECT_ID('TEMPDB..#lab') IS NOT NULL
    DROP TABLE #lab

SELECT cohort.PatientSID
      ,SubQuery1.LabChemResultNumericValue AS 'A1c%'
      ,SubQuery1.LabChemCompleteDateTime AS 'A1c% Date'
      ,SubQuery2.LabChemResultNumericValue AS 'LDL'
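Would OUTER APPLY with TOP (1) per lab be a cleaner fit here? Each patient row picks up the most recent value and date, with NULLs where a lab is absent. An untested sketch; the lab source table dbo.LabChem and its test-name column are assumptions standing in for the real schema:

    IF OBJECT_ID('tempdb..#lab') IS NOT NULL DROP TABLE #lab;

    SELECT c.PatientSID,
           a1c.LabChemResultNumericValue AS [A1c%],
           a1c.LabChemCompleteDateTime   AS [A1c% Date],
           ldl.LabChemResultNumericValue AS [LDL],
           ldl.LabChemCompleteDateTime   AS [LDL Date]
    INTO #lab
    FROM #cohort AS c
    OUTER APPLY (SELECT TOP (1) l.LabChemResultNumericValue, l.LabChemCompleteDateTime
                 FROM dbo.LabChem AS l            -- hypothetical lab table
                 WHERE l.PatientSID = c.PatientSID AND l.LabChemTestName = 'A1c'
                 ORDER BY l.LabChemCompleteDateTime DESC) AS a1c
    OUTER APPLY (SELECT TOP (1) l.LabChemResultNumericValue, l.LabChemCompleteDateTime
                 FROM dbo.LabChem AS l
                 WHERE l.PatientSID = c.PatientSID AND l.LabChemTestName = 'LDL'
                 ORDER BY l.LabChemCompleteDateTime DESC) AS ldl;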
Our developers have gotten this idea lately that instead of having many small stored procedures that each do one thing and have small parameter lists that SQL can optimize query plans for, it's better to put 8-10 different queries in the same stored procedure.
They tend to look like this:
create procedure UberProc (@QueryId varchar(50))
as
if @QueryId = 'First Horrible Idea'
begin
    select stuff from something
end
if @QueryId = 'Second really bad idea'
begin
    select otherstuff from somethingelse
end
I see the following problems with this practice:
1) SQL can't cache the query plan appropriately
2) They are harder to debug
3) They use these same sorts of things for not just gets, but also updates, with lots of optional NULLable parameters that are not properly handled to avoid parameter sniffing.
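The counter-proposal I'd like to put to them is one procedure per query, optionally fronted by a thin dispatcher so existing callers keep working; sketched below with the names from the example, each child getting its own compiled and cached plan:

    create procedure GetStuff as
        select stuff from something;
    go
    create procedure GetOtherStuff as
        select otherstuff from somethingelse;
    go
    -- Thin dispatcher preserves the @QueryId interface while each child
    -- procedure still compiles and caches independently.
    create procedure UberProc (@QueryId varchar(50)) as
    begin
        if @QueryId = 'First Horrible Idea'    exec GetStuff;
        if @QueryId = 'Second really bad idea' exec GetOtherStuff;
    end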
I have a script that needs to be run for 50 different @ClientIDs. I don't want to run this script individually for each ClientID. Would SET @ClientID IN (111, 222, 333) work? I've been told that it wouldn't. The short version of the script is .....
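As I understand it, SET @ClientID IN (...) isn't valid T-SQL; a simple loop over a list of IDs is the usual minimal change when the script body has to stay as-is. A sketch (the script body is represented by a comment):

    DECLARE @Clients TABLE (ClientID int PRIMARY KEY);
    INSERT @Clients (ClientID) VALUES (111), (222), (333);  -- ...all 50 IDs

    DECLARE @ClientID int;

    DECLARE client_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT ClientID FROM @Clients;
    OPEN client_cursor;
    FETCH NEXT FROM client_cursor INTO @ClientID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- body of the original script goes here, using @ClientID
        FETCH NEXT FROM client_cursor INTO @ClientID;
    END
    CLOSE client_cursor;
    DEALLOCATE client_cursor;

A set-based rewrite (joining the script's queries to @Clients) would generally be preferable if the script allows it.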
I need to create a stored procedure that will have about 10-15 queries and take 3 parameters. The variables will be: @lastmonth, @curryear and @id.

@lastmonth should inherit the session variable intLastmonth
@curryear should inherit the session variable intCurrYear
@id should inherit the session id

One example query is:

SELECT hours FROM table WHERE MONTH ='" + Session("intLastmonth") + "' AND YEAR ='" + Session("intCurrYear") + "' AND [NUMBER] = '" + Session("id")

The rest of the queries will be similar and use all 3 variables as well. How can I go about this, and how will the queries be separated?
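A hedged sketch of the shape this could take (procedure and table names are placeholders): the page passes the three Session values in as typed parameters, and the 10-15 queries are simply separate statements inside the procedure, each returning its own result set in order:

    CREATE PROCEDURE dbo.usp_MonthlyFigures   -- hypothetical name
        @lastmonth int,
        @curryear  int,
        @id        int
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Query 1: each SELECT is its own statement; the caller reads
        -- the result sets in the order they appear here.
        SELECT hours
        FROM [table]                           -- placeholder from the example
        WHERE [MONTH] = @lastmonth
          AND [YEAR]  = @curryear
          AND [NUMBER] = @id;

        -- Queries 2 through 15 follow the same pattern, one statement each.
    END

Passing the values as parameters also avoids the string-concatenation style in the example query above, which is open to SQL injection.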
I'm attempting to build a report where you can place a specific code in the parameter field and it will return all row values based on that particular code. I have a similar report that works great, but there the specific code is in just 1 column; in the one I'm trying to create, the code can potentially be in up to 20 different spots. I have the report built, but the issue I'm facing is linking the parameter. Is there a way to link 1 parameter to multiple column options?
Here's an example:
Docflo  Distribution Group  Queue      Status  Pend1  Pend2  Pend3  Pend4  Pend5
ABC     ABC1                Catch All  NEW     123    126    125    621    129
ABC     ABC1                Various    PENDED  621    123    872    542    630
Right now, if I were to link the parameter to the Pend1 field, I would get every line I wanted that had Pend "123", but it would not include any of the lines where Pend "123" was in Pend 2, Pend 3, Pend 4, and so on.
How would I link the parameter to more than 1 column so it would return all rows with a specific code no matter which Pend column it was in?
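If the dataset query can be edited, one way that seems to fit is letting the single parameter match any of the Pend columns (extendable to all 20); a sketch with the column names from the example and a placeholder source table:

    SELECT Docflo, [Distribution Group], Queue, Status,
           Pend1, Pend2, Pend3, Pend4, Pend5
    FROM dbo.YourReportTable                                 -- hypothetical source
    WHERE @PendCode IN (Pend1, Pend2, Pend3, Pend4, Pend5);  -- true if any column holds the code

The IN here is just shorthand for @PendCode = Pend1 OR @PendCode = Pend2 OR ..., so a row qualifies no matter which Pend column contains the value.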
I am playing around in a test environment with SQL Server 2014. I have a question about the default location of the report server databases when you have multiple report server instances installed on one server.
I did a very simple install of SQL Server 2014 with the database and Reporting Services in Native Mode (install only) features selected. Accepting the default locations, I ended up with the following locations as you would expect:
Running the Reporting Services Configuration Manager, I created the Report Server database. After creating it, the related files were located in the SQL folder below, as I would expect.
Next I installed another instance of SQL Server 2014, which I called Test, like I did above. I now have the following folder structure for the Test instance, as I expect.
I can easily query multiple servers using the multi-server query function in Central Management Server and write some of the results to logging tables. I would like to be able to do this via a scheduled job. So far I am finding that even with Master/Target Servers set up this may not work, and the only workaround is either SSIS, SQLCMD (basically hard-coding the server name), or possibly PowerShell.
Can anyone tell me if they have been successful just using standard jobs and querying against multiple servers?
If I can't save the results to a 'central' database/table (I can do this when in SSMS) but can still query against multiple servers, I was thinking I could write the results to a CSV file that an SSIS job picks up.
I have attempted using SSIS to iterate through servers and have been plagued with intermittent connection issues when using a For...Loop container.
I have a query with a huge number of CASE statements. Basically I need to shorten this query by getting rid of these hundreds of CASE statements.
Because of the nature of the application I am not allowed to use a function, and I'm just wondering if there is a possible way to rewrite this with COALESCE().
SELECT CASE WHEN A.[COL_1] LIKE '%cricket%' THEN 'ck' + ',' ELSE '' END
     + CASE WHEN A.[COL_1] LIKE '%soccer%' THEN 'sc' + ',' ELSE '' END
     + ....
     + CASE WHEN A.[RESIUTIL_DESC] LIKE '%base%ball' THEN 'BB' + ',' ELSE '' END
FROM TableName A
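Rather than COALESCE(), one rewrite that eliminates the repeated CASEs entirely is moving the pattern/abbreviation pairs into a mapping table and concatenating the matches. An untested sketch (the mapping table and its values are illustrative, and it assumes the patterns all test the same column):

    -- One row per pattern replaces one CASE per pattern
    CREATE TABLE #map (pattern varchar(50), code varchar(5));
    INSERT #map (pattern, code)
    VALUES ('%cricket%', 'ck'),
           ('%soccer%',  'sc'),
           ('%base%ball','BB');   -- ...and the rest

    SELECT A.[COL_1],
           -- Concatenate every matching code, comma-separated
           STUFF((SELECT ',' + m.code
                  FROM #map AS m
                  WHERE A.[COL_1] LIKE m.pattern
                  FOR XML PATH('')), 1, 1, '') AS codes
    FROM TableName AS A;

Adding a new keyword then becomes an INSERT into #map instead of another CASE branch in the query.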