SQL Server 2014 :: Order By Month Number In A Group By
Dec 19, 2014
Sample Data
SET NOCOUNT ON;
USE tempdb;
GO
CREATE TABLE CheckRegistry (
CheckNumber smallint,
[code]...
How can I get the result ordered by the month number?
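A common pattern here is to group on both the month name and the month number, and sort on the number. A minimal sketch, assuming CheckRegistry has CheckDate and Amount columns (both assumptions, since the posted DDL is truncated):

SELECT DATENAME(month, CheckDate) AS CheckMonth,   -- display name
       SUM(Amount) AS MonthTotal                   -- Amount is an assumed column
FROM CheckRegistry
GROUP BY DATENAME(month, CheckDate), MONTH(CheckDate)
ORDER BY MONTH(CheckDate);                         -- sort by month number, not name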
I need to group the records randomly into ‘n’ batches. That can be done with NTILE, but I want similar records grouped together in a single batch.
Say for example, following is the list of records I have in my table which I want to group into 5 batches
A123
A124
A124
A123
A127
After NTILE I will get the below.
The desired output is like NTILE's, but all rows with the same ID should reside in a single batch.
Even if n = 5, only 3 batches are possible here, since there are only 3 distinct IDs.
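One hedged approach (a sketch; MyTable and id are placeholder names): batch the distinct IDs with NTILE over a random order, then join the batch numbers back, so every row with the same ID lands in the same batch:

DECLARE @n int = 5;
WITH ids AS (
    SELECT DISTINCT id FROM MyTable
), batched AS (
    SELECT id, NTILE(@n) OVER (ORDER BY NEWID()) AS batch   -- random batch per distinct id
    FROM ids
)
SELECT t.id, b.batch
FROM MyTable t
JOIN batched b ON b.id = t.id;

As noted above, with only 3 distinct IDs at most 3 batches can appear, whatever n is.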
I have a table as below.
Timestamp | SourceId | ChannelId | Value
2015-11-01 16:45:59.953 | 1 | 12 | 23,4923
2015-11-01 16:46:00.127 | 1 | 22 | 24,9357 *
2015-11-01 16:46:00.327 | 1 | 32 | 27,4183 *
2015-11-01 16:46:00.527 | 1 | 42 | 21,025 *
2015-11-01 13:17:14.177 | 1 | 12 | 23,0304 *
I only want the first value per channel in a new table (the values marked with *).
I tried the SQL below, without success.
set nocount on;
declare @today date;
set @today = CURRENT_TIMESTAMP;
Select * into new_table from (
select @today as 'Now', dateadd(day, -(day(@today)) + 1, @today) as 'FirstOfMonth', Timestamp, SourceId,
[Code] ....
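Assuming "first" means the earliest Timestamp per ChannelId (which matches the starred rows), ROW_NUMBER is the usual tool; a sketch, with SampleValues standing in for the real table name:

WITH ranked AS (
    SELECT [Timestamp], SourceId, ChannelId, [Value],
           ROW_NUMBER() OVER (PARTITION BY ChannelId
                              ORDER BY [Timestamp]) AS rn
    FROM SampleValues
)
SELECT [Timestamp], SourceId, ChannelId, [Value]
INTO new_table
FROM ranked
WHERE rn = 1;   -- keeps only the earliest row per channel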
This is my table and data
CVID | WorkExperience
--------------------------------
28387 | 36
68181 | 101
96568 | 122
113548 | 4
I need to convert it into this result:
CVID | WorkExperience
--------------------------------
28387 | 3 years
68181 | 8 years 5 months
96568 | 10 years 2 months
113548 | 4 months
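Assuming WorkExperience stores a total number of months (which fits the sample: 36 -> 3 years, 101 -> 8 years 5 months), integer division and modulo give the two parts; a sketch, with CV as a placeholder table name:

SELECT CVID,
       LTRIM(
           CASE WHEN WorkExperience / 12 > 0
                THEN CAST(WorkExperience / 12 AS varchar(10)) + ' years' ELSE '' END
         + CASE WHEN WorkExperience % 12 > 0
                THEN ' ' + CAST(WorkExperience % 12 AS varchar(10)) + ' months' ELSE '' END
       ) AS WorkExperience
FROM CV;

Singular forms ('1 year', '1 month') would need two more CASE branches.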
I would like to know how to find the fifth working day* of a given month and year.
*working day - a working day is just any day apart from Saturday and Sunday; no need to consider any other holidays.
For example, I would give the month number and year as input, and I would like to know the date and day of the fifth working day.
Input : 10/2013
Output : 7th October 2013, Monday.
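A sketch: the fifth working day always falls within the first 9 calendar days of a month (at most two weekend days can intervene), so generating just those days and numbering the non-weekend ones is enough. DATENAME(weekday, ...) is language-dependent, so this assumes an English session language:

DECLARE @month int = 10, @year int = 2013;
DECLARE @first date = DATEFROMPARTS(@year, @month, 1);

SELECT d AS FifthWorkingDay, DATENAME(weekday, d) AS DayName
FROM (
    SELECT d, ROW_NUMBER() OVER (ORDER BY d) AS rn
    FROM (SELECT DATEADD(day, v.n, @first) AS d
          FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8)) v(n)) nine_days
    WHERE DATENAME(weekday, d) NOT IN ('Saturday', 'Sunday')
) w
WHERE rn = 5;   -- returns 2013-10-07, Monday, for the sample input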
I am planning to take one full backup and then transaction log backups every month, as I will be making changes to the database only once a month.
I am aware that in case of disaster I need to restore the database with all of the transaction log backups. My plan is to keep transaction log backups for 5 years, and after 5 years I would take another full backup.
Should I take any other precautions, or are there any concerns with this approach?
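The main precaution with a 5-year log chain: a restore needs every log backup in the chain, so a single lost or corrupt .trn file caps recovery at that point, and a 60-file restore sequence is painful to script and test. Periodic full (or differential) backups shorten the chain. A minimal sketch of the proposed cycle, with hypothetical names and paths:

BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak' WITH INIT;
-- ... then after each monthly change:
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log_201412.trn';
-- and verify each file to catch corruption early:
RESTORE VERIFYONLY FROM DISK = N'D:\Backup\MyDb_log_201412.trn';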
How do I convert a 6-digit number to mm/yyyy?
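Assuming the 6-digit value packs the month and year as MMYYYY in an int (e.g. 102014 for October 2014, an assumption), arithmetic splits it; a sketch:

DECLARE @v int = 102014;
SELECT RIGHT('00' + CAST(@v / 10000 AS varchar(2)), 2)   -- month, zero-padded
       + '/' + CAST(@v % 10000 AS varchar(4)) AS [mm/yyyy];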
I want the count of orders in a particular table on a weekly basis, i.e. if the date given to me is 10/3/2014, then my output should be the count of orders from 10/3/2014 to 09/3/2014 (one week), then the count of orders from 2/3/2014 to 08/3/2014 (another week), and then from 24/2/2014 to 01/3/2014 (another week), and so on...
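One way to bucket by 7-day windows counting back from the given date is integer division on the day difference; a sketch, with Orders/OrderDate as assumed names:

DECLARE @given date = '2014-03-10';
SELECT DATEDIFF(day, OrderDate, @given) / 7 AS WeeksBack,   -- 0 = most recent week
       COUNT(*) AS OrderCount
FROM Orders
WHERE OrderDate <= @given
GROUP BY DATEDIFF(day, OrderDate, @given) / 7
ORDER BY WeeksBack;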
In SQL Server 2008, I have two fields - Department and Employees. I need to sort the result set by employee number in ascending order, with the following exceptions:
1) When department number = 50, the preferred order is Employee # 573, followed by 551-572 (employee #s belonging to Dept 50 = 551-573).
2) When department number = 20, the preferred sort order is Employee # 213-220, followed by Employee # 201-212 (employee #s belonging to Dept 20 = 201-220).
How shall I achieve this?
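A CASE expression in the ORDER BY can encode the exceptions; a sketch, assuming columns named Department and EmployeeNo:

SELECT Department, EmployeeNo
FROM Employees
ORDER BY Department,
         CASE WHEN Department = 50 AND EmployeeNo = 573 THEN 0                -- 573 ahead of 551-572
              WHEN Department = 20 AND EmployeeNo BETWEEN 213 AND 220 THEN 0  -- 213-220 ahead of 201-212
              ELSE 1
         END,
         EmployeeNo;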
I want to group the attached data by CUSTACCT, YEAR, PERIOD (month). As you can see in the attached example, I see duplicates for late fees and exchange fees but none for FLIGHT_HRS. I want to see the data grouped by CUSTACCT, then by YEAR, then by PERIOD for each of these: FLIGHTHRSSUM, LATEFEESUM, EXCHFEESUM, without duplicates.
Is this possible?
This is my current SQL
SELECT dbo.VIEW_FLIGHT_HRS_TOT.YEAR AS FLTHRSYEAR, dbo.VIEW_FLIGHT_HRS_TOT.PERIOD AS FLTHRSPERIOD, dbo.VIEW_FLIGHT_HRS_TOT.FLIGHTHRSSUM,
dbo.VIEW_LATE_FEES_TOT.PERIOD AS LATEFEEPERIOD, dbo.VIEW_LATE_FEES_TOT.YEAR AS LATEFEEYEAR, dbo.VIEW_LATE_FEES_TOT.LATEFEESUM,
[Code] ....
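The duplicates are the classic fan-out from joining two views that each carry multiple rows per CUSTACCT/YEAR/PERIOD. Aggregating each view first and then joining the aggregates avoids it; a sketch (it assumes each view exposes CUSTACCT, and VIEW_EXCH_FEES_TOT is a guessed name for the exchange-fee view):

SELECT f.CUSTACCT, f.[YEAR], f.PERIOD, f.FLIGHTHRSSUM, l.LATEFEESUM, e.EXCHFEESUM
FROM (SELECT CUSTACCT, [YEAR], PERIOD, SUM(FLIGHTHRSSUM) AS FLIGHTHRSSUM
      FROM dbo.VIEW_FLIGHT_HRS_TOT GROUP BY CUSTACCT, [YEAR], PERIOD) f
LEFT JOIN (SELECT CUSTACCT, [YEAR], PERIOD, SUM(LATEFEESUM) AS LATEFEESUM
           FROM dbo.VIEW_LATE_FEES_TOT GROUP BY CUSTACCT, [YEAR], PERIOD) l
       ON l.CUSTACCT = f.CUSTACCT AND l.[YEAR] = f.[YEAR] AND l.PERIOD = f.PERIOD
LEFT JOIN (SELECT CUSTACCT, [YEAR], PERIOD, SUM(EXCHFEESUM) AS EXCHFEESUM
           FROM dbo.VIEW_EXCH_FEES_TOT GROUP BY CUSTACCT, [YEAR], PERIOD) e
       ON e.CUSTACCT = f.CUSTACCT AND e.[YEAR] = f.[YEAR] AND e.PERIOD = f.PERIOD;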
I have one table which contains 3 columns:
1. emp
2. monthyear
3. amount
That table has 6 rows, like below:
emp monthyear amount
1 102013 1000
2 102013 1000
1 112013 1000
1 112013 1000
2 122013 1000
2 122013 0000
I want a total per employee, on a GROUP BY condition, that includes the data from Nov to Dec 2013 only.
The output should be like this:
1 2000
2 1000
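A sketch, assuming monthyear is an int in MMYYYY form as shown and EmpAmounts is a placeholder table name:

SELECT emp, SUM(amount) AS total
FROM EmpAmounts
WHERE monthyear IN (112013, 122013)   -- Nov and Dec 2013 only
GROUP BY emp
ORDER BY emp;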
We have an application that can spit out our Facility Structure into the following format. We need to take that data and feed it into another system; however, as you can see, it is organized in a PARENT / CHILD structure.
Indent Level Key:
1 = System
2 = Facility
3 = Service Line
4 = Division
5 = Department
If you notice, each additional row is the child row of its parent above it, as long as the Indent Level Key continues to increment. Once we start going back up the structure, we essentially have a new Division, Service Line, or Facility, depending on how far back we jump on the Indent Level for the next row. Here is some sample data.
--===== If the test table already exists, drop it
IF OBJECT_ID('TempDB..#Test_Data','U') IS NOT NULL
DROP TABLE #Test_Data
--===== Create the test table with
SET ANSI_NULLS ON
[Code] .....
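With this layout, each row's parent is the most recent preceding row whose indent level is exactly one less. Since the posted DDL is truncated, the following sketch assumes #Test_Data has columns RowID (preserving the file order), IndentLevel, and Name:

SELECT c.RowID, c.IndentLevel, c.Name,
       p.RowID AS ParentRowID, p.Name AS ParentName
FROM #Test_Data c
OUTER APPLY (
    SELECT TOP (1) RowID, Name
    FROM #Test_Data
    WHERE RowID < c.RowID
      AND IndentLevel = c.IndentLevel - 1   -- nearest ancestor one level up
    ORDER BY RowID DESC
) p;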
We have 4 servers: Server 1, Server 2, and the HA and DR servers.
I designed 2 plans to get HA support for my databases.
Which of them is better, and is there any problem in my design?
I have SSIS 2012 Enterprise, using catalog deployment, and have more than 50 environment variables for connections to databases across my enterprise.
The problem is that when I go to configure the packages after deployment and pick the proper environment variables, they are not sorted, so I have to browse all the entries in order to find the proper one.
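As a workaround, the variables can at least be listed sorted straight from the SSISDB catalog views; a sketch:

SELECT e.name AS environment_name, v.name AS variable_name, v.value
FROM SSISDB.catalog.environment_variables AS v
JOIN SSISDB.catalog.environments AS e
  ON e.environment_id = v.environment_id
ORDER BY e.name, v.name;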
I'm using SQL Server 2012 and I need to run a query against my database that will output the difference between 2 dates (namely, DateOfArrival and DateOfDeparture) into the correct month column in the output.
Both DateOfArrival and DateOfDeparture are in the same table (let's say GuestStay). I will also need some other fields from this table and will do some joins on some other tables, but I will simplify things so as to focus on the main problem here. Let's say the fields needed from the GuestStay table look like below:
I need my query to output in the following format:
How to write this query?
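Since the desired output format was not reproduced here, the sketch below just produces one row per stay per overlapped month (nights clipped to each month); a PIVOT on the month name would then turn that into month columns. GuestStayID is an assumed key column:

SELECT g.GuestStayID,
       DATENAME(month, m.MonthStart) AS StayMonth,
       DATEDIFF(day,
                CASE WHEN g.DateOfArrival > m.MonthStart THEN g.DateOfArrival ELSE m.MonthStart END,
                CASE WHEN g.DateOfDeparture < m.NextMonth THEN g.DateOfDeparture ELSE m.NextMonth END
       ) AS Nights
FROM GuestStay g
CROSS APPLY (
    SELECT DATEADD(month, v.n, DATEFROMPARTS(YEAR(g.DateOfArrival), MONTH(g.DateOfArrival), 1)) AS MonthStart,
           DATEADD(month, v.n + 1, DATEFROMPARTS(YEAR(g.DateOfArrival), MONTH(g.DateOfArrival), 1)) AS NextMonth
    FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11)) v(n)   -- covers stays up to a year
) m
WHERE g.DateOfDeparture > m.MonthStart
  AND g.DateOfArrival < m.NextMonth;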
I'm trying to do the following and haven't been able to figure it out.
Say there's a table with these records:
Col1 Col2 Col3
a b c
a b c
a b d
e f g
e f g
I want to generate a number that represents the groups of columns like this:
Col1 Col2 Col3 MyNumber
a b c 1
a b c 1
a b d 2
e f g 3
e f g 3
So that each grouping gets its own identifier. I've tried this:
SELECT Col1, Col2, Col3,
row_number() OVER (PARTITION BY Col1, Col2, Col3
ORDER BY Col1, Col2, Col3) AS MyNumber
FROM MyTable
But I get this:
Col1 Col2 Col3 MyNumber
a b c 1
a b c 2
a b d 1
e f g 1
e f g 2
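ROW_NUMBER numbers the rows within each partition, which is why identical rows get 1, 2, ... Using DENSE_RANK over the same columns as a plain ORDER BY (no PARTITION BY) gives every identical (Col1, Col2, Col3) combination one shared number:

SELECT Col1, Col2, Col3,
       DENSE_RANK() OVER (ORDER BY Col1, Col2, Col3) AS MyNumber
FROM MyTable;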
I have an ERD and I have 2 questions about it:
What is the minimum number of times that R2 can appear in the connection, and what is the maximum?
A=5
B=5
C1=1
C2=2
I have a function which generates a random number in a range:
CREATE FUNCTION Func_Rand
(
@MAX BIGINT ,
@UNIQID UNIQUEIDENTIFIER
)
RETURNS BIGINT
AS
BEGIN
RETURN (ABS(CHECKSUM(@UNIQID)) % @MAX)+1
END
GO
If you run this script, you always get a result between 1 and 4:
SELECT dbo.Func_Rand(4, NEWID())
But this script sometimes returns no result. Why?
SELECT 'aa'
WHERE dbo.Func_Rand(4, NEWID()) IN ( 1, 2, 3, 4 )
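A likely explanation: SQL Server expands the IN list into OR'd comparisons and may re-evaluate dbo.Func_Rand(4, NEWID()) for each one, so every comparison tests a different random value; with four independent draws there is roughly a (3/4)^4, about 32%, chance that none of them matches its literal. Capturing the value once avoids the re-evaluation:

DECLARE @r bigint = dbo.Func_Rand(4, NEWID());
SELECT 'aa'
WHERE @r IN (1, 2, 3, 4);   -- the single captured value always falls in 1-4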
I have heard that high numbers of VLFs aren't good: they can impact performance and delay recovery time. I wanted to test that.
I created 2 DBs with 100MB datafile and 50MB logfile.
TestDB log file had 100MB autogrowth
TestDB2 log file had 1% growth.
I inserted 1,048,576 records, took a backup,
ran DBCC LOGINFO, and:
TestDB had 40 VLFs and
TestDB2 had 165 VLFs
But when I restored both DBs, this is what I got.
TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, which has 40 VLFs, any better than TestDB2, which has 165 VLFs?
Is there any way to trigger an event (i.e. call a procedure, job, etc.) when/if an AG fails over?
I'm not looking to use agent alerts.
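Without agent alerts, one hedged option is to poll the HADR DMVs on a schedule and react when the local replica reports the PRIMARY role; the procedure name below is hypothetical:

IF EXISTS (
    SELECT 1
    FROM sys.dm_hadr_availability_replica_states rs
    JOIN sys.availability_replicas r
      ON r.replica_id = rs.replica_id
    WHERE rs.is_local = 1
      AND rs.role_desc = 'PRIMARY'
)
    EXEC dbo.usp_OnFailover;   -- hypothetical: whatever should run after failover

An Extended Events session on the availability-group state-change events is another route worth exploring.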
Recently, after turning on a trace, I restarted the SQL services on a box that is configured for automatic-failover availability groups. The AG did not fail over to the other node; the other node was stuck in the RESOLVING state. When the restarted server came back, the AG went back to that server. I checked sys.availability_groups for the failover property "failure condition level"; it is set to 1, which I understood to mean that service restarts should initiate the failover.
What I asked for: three Windows Server 2012 R2 machines with independent storage running a SQL Server 2014 AlwaysOn Availability Group. DB1 would be the primary, DB2 would be a synchronous replica, and DB3 would be a remote asynchronous replica.
What I was given: a two-node Windows Server 2012 R2 WSFC to run SQL Server 2014 Enterprise with shared storage and a third (remote) Windows Server 2012 R2 machine with independent storage, also with SQL Server 2014 Enterprise, to host an AlwaysOn Availability Groups asynchronous replica.
DB1 and DB2 (as Cluster1) share an E: drive. The remote DB3 has its own E: drive. Initially, DB3’s E: drive was claimed as a cluster resource and I couldn’t even see it. I’ve had several ugly days trying to make this work and have temporarily given up, installing DB3 as a standalone SQL Server that is no longer part of the WSFC and pointing everything towards that (it was originally a third node in the WSFC).
Is it possible to create an AlwaysOn Availability Group with nested clusters (i.e. create the AOAG with Cluster1 and DB3 and somehow ignore the individual nodes that comprise Cluster1)?
create table #temp_Alpha_num (
[uniquenum] [int] NOT NULL,
[Somenum] [int] NOT NULL
)
Insert into #temp_Alpha_num(uniquenum,Somenum)
values
(1,121)
[Code] ....
For the Somenum column I need to append letters to make the values unique, for example:
121a, 121b, 121c ... 121z, 121aa, 121ab, 101a, and so on...
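A sketch using ROW_NUMBER per Somenum to pick the letter; CHAR(96 + n) maps 1 -> a, 2 -> b, and so on. This covers up to 26 duplicates; the aa, ab, ... extension would need an extra division step:

SELECT uniquenum, Somenum,
       CAST(Somenum AS varchar(20))
       + CHAR(96 + ROW_NUMBER() OVER (PARTITION BY Somenum
                                      ORDER BY uniquenum)) AS SomenumAlpha
FROM #temp_Alpha_num;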
I have this query
SELECT top 100 Ltrim([text]),objectid,total_rows,total_logical_reads , execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
where last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative from when the plan was first cached; I want the count for one day only.
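Since execution_count accumulates for as long as the plan stays cached, a single-day figure needs two snapshots and a diff; a sketch (the snapshot table names are made up):

-- snapshot now:
SELECT sql_handle, plan_handle, execution_count, GETDATE() AS captured_at
INTO QueryStatsSnapshotA
FROM sys.dm_exec_query_stats;
-- snapshot again 24 hours later into QueryStatsSnapshotB, then:
-- SELECT b.sql_handle, b.execution_count - a.execution_count AS runs_that_day
-- FROM QueryStatsSnapshotB b
-- JOIN QueryStatsSnapshotA a
--   ON a.sql_handle = b.sql_handle AND a.plan_handle = b.plan_handle;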
I have a CTE query against a table with 32K rows that runs fine in 2008 R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan, I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M... how is this possible?
the query:
declare @amts table (claim int,allowed decimal(12,2),copay decimal(12,2),deductible decimal(12,2),coins decimal(12,2));
;with unpaid (claimID) as (select claimID from claim where amt + copay + disct + mm + ded = 0)
insert @amts
select lineID, sum(rc), sum(copay), sum(deduct),
case when sum(mm)>0 and (sum(mm)<sum(mmamt)) then sum(mm) else 0 end
from claimln
where status is null
and lineID not in (select claimID from unpaid)
group by lineID
it's like there's some massively recursive process going on?
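On the 351M "actual number of rows": in a plan, an operator's actual row count is totalled across every execution of that operator, so the inner side of a nested loops join that runs tens of thousands of times can report hundreds of millions of rows even against a 32K-row table. The NOT IN over the CTE is the usual suspect; a hedged NOT EXISTS rewrite to try (it also sidesteps NOT IN's NULL trap):

;with unpaid (claimID) as
 (select claimID from claim where amt + copay + disct + mm + ded = 0)
insert @amts
select c.lineID, sum(c.rc), sum(c.copay), sum(c.deduct),
       case when sum(c.mm) > 0 and sum(c.mm) < sum(c.mmamt) then sum(c.mm) else 0 end
from claimln c
where c.status is null
  and not exists (select 1 from unpaid u where u.claimID = c.lineID)
group by c.lineID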
I have four columns in my table; the first one is the identity column.
col1 col2 col3 col4
1 12 1 This is Test1
2 12 2 This is Test1
3 12 3 This is Test3
4 12 4 This is Test4
5 12 5 @@@@@
When I see the @@@@@ sign in col4, I need to restart col3 from 1 again, so it will look like this:
col1 Col2 col3 col4
1 12 1 This is Test1
2 12 2 This is Test1
3 12 3 This is Test3
4 12 4 This is Test4
5 12 5 @@@@@
6 12 1 This is another test1
7 12 2 This is another Test2
Is it possible to do that?
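It is. A sketch: count the @@@@@ marker rows strictly before each row to form reset groups, then number within each group (MyTable is a placeholder name; col1, the identity, supplies the order):

WITH g AS (
    SELECT col1, col2, col4,
           COALESCE(SUM(CASE WHEN col4 LIKE '@@@@@%' THEN 1 ELSE 0 END)
                    OVER (ORDER BY col1
                          ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS grp
    FROM MyTable
)
SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY grp ORDER BY col1) AS col3,
       col4
FROM g;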
I have a family table and would like to group all related members under the same familyID. This is a replication of existing business data, 14,000 rows. The familyID can be randomly assigned to any group, its sole purpose is to group the names:
declare @tv table (member varchar(255), relatedTo varchar(255))
insert into @tv
select 'John', 'Mary' union all
select 'Mary', 'Jessica' union all
select 'Peter', 'Albert' union all
[Code] ....
I would like my result to look like this:
familyID Name
1 John
1 Mary
1 Jessica
1 Fred
2 Peter
2 Albert
2 Nancy
3 Abby
4 Joe
4 Frank
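One hedged approach: treat the pairs as undirected edges and walk each member's connected component with a recursive CTE, tracking the visited path to avoid cycles; the smallest name reachable from a member then serves as the family key, and DENSE_RANK turns the keys into sequential family IDs. This is workable at 14,000 rows, though an iterative UPDATE loop usually scales better for large graphs:

;WITH edges AS (
    SELECT member, relatedTo FROM @tv
    UNION
    SELECT relatedTo, member FROM @tv          -- make the graph undirected
), reach AS (
    SELECT m.member AS root, m.member AS linked,
           CAST('|' + m.member + '|' AS varchar(max)) AS path
    FROM (SELECT DISTINCT member FROM edges) m
    UNION ALL
    SELECT r.root, e.relatedTo, r.path + e.relatedTo + '|'
    FROM reach r
    JOIN edges e ON e.member = r.linked
    WHERE r.path NOT LIKE '%|' + e.relatedTo + '|%'   -- don't revisit a name
)
SELECT DENSE_RANK() OVER (ORDER BY f.familyKey) AS familyID, f.root AS Name
FROM (SELECT root, MIN(linked) AS familyKey FROM reach GROUP BY root) f
ORDER BY familyID, Name;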
I have a user who is trying to log into the server, but every time he gets an error saying something about Group Policy denying him access.
This user needs access, and I'm trying to understand how to grant it to him.
I have been looking into how I can access the Group Policy editor, but the farthest I can get is the Local Group Policy Editor. How do I make sure this specific user has access?
Where can I find the dates and times of when an availability group failed over, outside of the SQL error log?
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since ca. 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stall/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why are the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increasing so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually, you can see the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally I want to notice
- I'm the only user on this machine
- the table has a clustered index on a identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries (a sketch follows below)
So I wonder why SQL Server starts to execute so many reads in test 1.
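For reference, the kind of DMV snapshot mentioned in the last bullet can be as simple as the following; these counter names all live in sys.dm_os_performance_counters under the Buffer Manager object:

SELECT counter_name, cntr_value, GETDATE() AS captured_at
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio', 'Page life expectancy',
                       'Free list stalls/sec', 'Lazy writes/sec');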
I have duplicate records in a table. I need to count the duplicate records based upon account number, store the count in a variable, and then check whether the count > 0 in a stored procedure. I have used the query below; it is not working.
SELECT @_Stat_Count= count(*),L1.AcctNo,L1.ReceivedFileID from Legacy L1,Legacy L2,ReceivedFiles where L1.ReceivedFileID = ReceivedFiles.ReceivedFileID
and L1.AcctNo=L2.AcctNo group by L1.AcctNo,L1.ReceivedFileID having Count(*)> 0
IF (@_Stat_Count >0)
BEGIN
SELECT @Status = status_cd from status-table where status_id = 10
END
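Two problems stand out in the posted attempt: a SELECT cannot both assign a variable and return columns from a GROUP BY, and HAVING COUNT(*) > 0 is true for every group, so it flags nothing special (duplicates need > 1). A sketch that aggregates first and then captures the total (the ReceivedFiles join is dropped for brevity, and [status-table] needs brackets because of the hyphen):

SELECT @_Stat_Count = COUNT(*)
FROM (
    SELECT L1.AcctNo
    FROM Legacy L1
    GROUP BY L1.AcctNo
    HAVING COUNT(*) > 1          -- accounts that appear more than once
) dup;

IF (@_Stat_Count > 0)
    SELECT @Status = status_cd FROM [status-table] WHERE status_id = 10;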
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, are there any benefits to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states: “Use a single log file in most situations as log files are written sequentially.”
I would like to know how to split a phone number into two columns based on the '-' symbol, dynamically.
for example
Phone Number
123-12323
1234-1222
so output should be
code number
123 12323
1234 1222
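CHARINDEX finds the '-' position wherever it falls, which handles the varying lengths; a sketch, with PhoneNumbers/PhoneNumber as assumed names:

SELECT LEFT(PhoneNumber, CHARINDEX('-', PhoneNumber) - 1) AS code,
       SUBSTRING(PhoneNumber, CHARINDEX('-', PhoneNumber) + 1, LEN(PhoneNumber)) AS number
FROM PhoneNumbers
WHERE CHARINDEX('-', PhoneNumber) > 0;   -- skip values without a dash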