SQL Server 2014 :: ERD - Minimum Number Of Times That R2 Can Appear In Connection?
Dec 14, 2014
I got this ERD and I have 2 questions about it:
What is the minimum number of times that R2 can appear in the connection, and what is the maximum?
A=5
B=5
C1=1
C2=2
View 1 Replies
Jan 30, 2015
I am trying to create a logon trigger. While testing it, I discovered that each time I make a connection, 19 rows are inserted into my audit table. I ran Profiler and can see the logon trigger firing multiple times for a single connection. So, what am I doing wrong? The code is fairly simple, and Profiler doesn't give any clue as to what is going on. When I look at the output, the SPIDs for the first couple of connections are different, and then a SPID that differs from those two appears in the next 17 rows. But when I run sp_who2, that SPID does not exist.
I first noticed this issue on a 2012 instance, then saw the same behaviour on a 2008 R2 instance, and I am currently testing on a 2014 instance that does the same thing. Is the logon trigger itself firing and causing this?
I also tried using the AFTER LOGON option and got the same issue.
Here is the code:
CREATE TRIGGER LogonAuditTrigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
DECLARE @Body NVARCHAR(2000),
[code]....
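For reference, a minimal, self-contained version of this kind of audit trigger might look like the sketch below. The audit table name (LogonAudit) and its columns are assumptions, since the original code is truncated.

-- Hypothetical audit table; the real table and columns are not shown in the post
CREATE TABLE dbo.LogonAudit (
    EventTime  DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    LoginName  NVARCHAR(128) NOT NULL,
    HostName   NVARCHAR(128) NULL,
    SessionID  INT           NOT NULL
);
GO
CREATE TRIGGER LogonAuditTrigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    -- One row per logon event; ORIGINAL_LOGIN() reports the connecting login
    -- rather than the 'sa' context supplied by EXECUTE AS
    INSERT INTO dbo.LogonAudit (LoginName, HostName, SessionID)
    SELECT ORIGINAL_LOGIN(),
           HOST_NAME(),
           @@SPID;
END;
GO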
View 0 Replies
Jan 28, 2015
I am trying to use Change Data Capture to load data into a second table from Table 1, which is populated from the UI.
What will be the minimum latency? Can CDC be used if the required latency is less than 5 seconds?
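For context, enabling CDC on a source table uses the system procedures below; the database, schema and table names here are placeholders, not taken from the post.

USE MyDatabase;  -- placeholder database name
GO
-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db;
GO
-- Enable CDC for the source table; the capture job then harvests changes
-- from the transaction log, so latency depends on that job's polling interval
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Table1',
     @role_name     = NULL;
GO

If I recall correctly, the capture job's default polling interval is 5 seconds and can be adjusted with sys.sp_cdc_change_job, so whether sub-5-second latency is achievable depends on that job rather than on CDC itself.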
View 1 Replies
Oct 27, 2015
I've got the query below, have tried several variations, and still can't seem to find a reliable way to query the server for the last full backup per database. I'm seeing multiple records in the backupset table with type = 'D'. Looking online, type 'D' = Database, which I assumed meant a full database backup. Apparently not. Try running the query below on one of your databases, then check whether the date returned is actually the last backup date.
DECLARE @db_name VARCHAR(100)
SELECT @db_name = DB_NAME()
-- Get Backup History for required database
SELECT TOP ( 30 ) s.database_name,
m.physical_device_name,
[Code] ....
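As a point of comparison, one common pattern for getting the most recent full backup per database from msdb is sketched below; this is a general-purpose query, not the poster's exact one.

-- Most recent full backup (type = 'D') for every database on the instance
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM   sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'          -- 'D' = full database backup
GROUP BY d.name
ORDER BY d.name;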
View 9 Replies
Jun 20, 2007
Hi. I have an 'Attendance' table like this:

PIN  Year  Category    Days
1    2006  Authorized  1
1    2006  Available   2
1    2006  Personal    3
2    2006  Authorized  4
2    2006  Available   5
2    2006  Personal    6
3    2006  Authorized  7
3    2006  Available   8
3    2006  Personal    9
4    2006  Authorized  10
4    2006  Available   11
4    2006  Personal    12
1    2007  Authorized  13
1    2007  Available   14
1    2007  Personal    15
2    2007  Authorized  16
2    2007  Available   17
2    2007  Personal    18
3    2007  Authorized  19
3    2007  Available   20
3    2007  Personal    21
4    2007  Authorized  22
4    2007  Available   23
4    2007  Personal    24

I need to sum the days by PIN, Year and Category (that's easy...) AND obtain a layout like this:

PIN  Auth 2006  Avail 2006  Pers 2006  Auth 2007  Avail 2007  Pers 2007
1    1          2           3          13         14          15
2    4          5           6          16         17          18
3    7          8           9          19         20          21
4    10         11          12         22         23          24

How can I do this with queries without writing too many intermediate steps? What I have done is this (5 queries; 2, 3, and 4 building on top of 1, and 5 building on 2, 3, 4).

1 = Table1_Crosstab:

TRANSFORM Sum(Table1.Days) AS SumOfDays
SELECT Table1.PIN, Table1.Year
FROM Table1
GROUP BY Table1.PIN, Table1.Year
PIVOT Table1.Category;

Then, based on that,

2 = Authorized:

TRANSFORM First([1 = Table1_Crosstab].Authorized) AS FirstOfAuthorized
SELECT [1 = Table1_Crosstab].PIN
FROM [1 = Table1_Crosstab]
GROUP BY [1 = Table1_Crosstab].PIN
PIVOT [1 = Table1_Crosstab].Year;

3 = Available:

TRANSFORM First([1 = Table1_Crosstab].Available) AS FirstOfAvailable
SELECT [1 = Table1_Crosstab].PIN
FROM [1 = Table1_Crosstab]
GROUP BY [1 = Table1_Crosstab].PIN
PIVOT [1 = Table1_Crosstab].Year;

and

4 = Personal:

TRANSFORM First([1 = Table1_Crosstab].Personal) AS FirstOfPersonal
SELECT [1 = Table1_Crosstab].PIN
FROM [1 = Table1_Crosstab]
GROUP BY [1 = Table1_Crosstab].PIN
PIVOT [1 = Table1_Crosstab].Year;

and finally

5 = All:

SELECT [2 = Authorized].PIN,
       [2 = Authorized].[2006] AS [Auth 2006],
       [3 = Available].[2006]  AS [Avail 2006],
       [4 = Personal].[2006]   AS [Pers 2006],
       [2 = Authorized].[2007] AS [Auth 2007],
       [3 = Available].[2007]  AS [Avail 2007],
       [4 = Personal].[2007]   AS [Pers 2007]
FROM ([2 = Authorized]
      INNER JOIN [3 = Available] ON [2 = Authorized].PIN = [3 = Available].PIN)
      INNER JOIN [4 = Personal]  ON [3 = Available].PIN = [4 = Personal].PIN;

It works, but... I am sure this is an awkward way of doing it. Is there any other, more elegant, way, please? Besides, what if I had not 3 but 15 categories, for example? Thanks a lot for your time reading this, Alex
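One way to collapse this into a single SQL Server query is conditional aggregation; the sketch below assumes the table is called Attendance with the columns shown above, and it hard-codes the year/category combinations (so 15 categories would still mean 15 expressions, or dynamic SQL).

SELECT PIN,
       SUM(CASE WHEN [Year] = 2006 AND Category = 'Authorized' THEN Days ELSE 0 END) AS [Auth 2006],
       SUM(CASE WHEN [Year] = 2006 AND Category = 'Available'  THEN Days ELSE 0 END) AS [Avail 2006],
       SUM(CASE WHEN [Year] = 2006 AND Category = 'Personal'   THEN Days ELSE 0 END) AS [Pers 2006],
       SUM(CASE WHEN [Year] = 2007 AND Category = 'Authorized' THEN Days ELSE 0 END) AS [Auth 2007],
       SUM(CASE WHEN [Year] = 2007 AND Category = 'Available'  THEN Days ELSE 0 END) AS [Avail 2007],
       SUM(CASE WHEN [Year] = 2007 AND Category = 'Personal'   THEN Days ELSE 0 END) AS [Pers 2007]
FROM   Attendance
GROUP BY PIN
ORDER BY PIN;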
View 4 Replies
Apr 11, 2015
In my staging table I have data like the rows below:
ABL¯ABL¯0¯0¯ABL¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯
ABL¯ABQ¯480¯825¯DLS¯AMA¯ABQ¯ ¯ ¯ ¯ ¯ ¯ ¯
ABL¯ACD¯808¯1255¯DLS¯ELP¯TCS¯PHX¯ACD¯ ¯ ¯ ¯ ¯
ABL¯ADE¯1256¯471¯DLS¯AMA¯ABQ¯LSV¯ADE¯ ¯ ¯ ¯ ¯
ABL¯AFT¯1140¯1744¯DLS¯LAX¯FON¯AFT¯ ¯ ¯ ¯ ¯ ¯
ABL¯AHM¯1178¯1637¯DLS¯LAX¯AHM¯ ¯ ¯ ¯ ¯ ¯ ¯
ABL¯ALB¯1769¯1825¯DLS¯WIL¯ALB¯ ¯ ¯ ¯ ¯ ¯ ¯
ABL¯ALE¯1041¯1150¯DLS¯ALE¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯
Now I want to find the number of times the '¯' character appears in a string; for these rows I should get 14.
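A common way to count occurrences of a single character is the REPLACE/LEN trick; a sketch, assuming the staging column is called Col1 in a table called dbo.Staging (both names are placeholders):

-- Replace each '¯' with two '¯' characters, so the length difference equals the
-- number of '¯' characters (and trailing spaces cannot skew LEN)
SELECT Col1,
       LEN(REPLACE(Col1, '¯', '¯¯')) - LEN(Col1) AS delimiter_count
FROM   dbo.Staging;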
View 2 Replies
Mar 21, 2015
I'm trying to quantify the number of times folks use SQL Server Management Studio to change client data in one of our production databases. Does SQL Server keep this statistic? How do I get to this data?
View 6 Replies
May 2, 2008
I have an application that connects to an externally hosted SQL server. The first time I go to the web page, the application times out. I can then immediately refresh and everything works. After a period of about 20-30 minutes of inactivity, the issue repeats itself.
This is very hard to troubleshoot.
Anyone know what to do?
View 2 Replies
Aug 6, 2014
I'm trying to do the following and haven't been able to figure it out.
Say there's a table with these records:
Col1 Col2 Col3
a b c
a b c
a b d
e f g
e f g
I want to generate a number that represents the groups of columns like this:
Col1 Col2 Col3 MyNumber
a b c 1
a b c 1
a b d 2
e f g 3
e f g 3
So that each grouping gets its own identifier. I've tried this:
SELECT Col1, Col2, Col3,
       row_number() OVER (PARTITION BY Col1, Col2, Col3
                          ORDER BY Col1, Col2, Col3) AS MyNumber
FROM MyTable
But I get this:
Col1 Col2 Col3 MyNumber
a b c 1
a b c 2
a b d 1
e f g 1
e f g 2
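For what it's worth, DENSE_RANK (without PARTITION BY) numbers whole groups rather than rows within each group, which matches the desired output above; a sketch against the same hypothetical MyTable:

-- DENSE_RANK assigns the same number to every row with the same (Col1, Col2, Col3)
SELECT Col1, Col2, Col3,
       DENSE_RANK() OVER (ORDER BY Col1, Col2, Col3) AS MyNumber
FROM   MyTable;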
View 9 Replies
Dec 19, 2014
Sample Data
SET NOCOUNT ON;
USE tempdb;
GO
CREATE TABLE CheckRegistry (
CheckNumber smallint,
[code]...
How can I get the result ordered by the month number?
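Since the table definition is truncated, this is only a guess at the intent: if the month comes from a date column, ordering by the numeric month rather than the month name usually looks like the sketch below, where CheckDate is an assumed column.

SELECT DATENAME(MONTH, CheckDate) AS CheckMonth,
       COUNT(*)                   AS ChecksWritten
FROM   CheckRegistry
GROUP BY DATENAME(MONTH, CheckDate), MONTH(CheckDate)
ORDER BY MONTH(CheckDate);   -- order by the numeric month, not the name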
View 3 Replies
Oct 3, 2015
I have a function which generates a random number in a range:
CREATE FUNCTION Func_Rand
(
@MAX BIGINT ,
@UNIQID UNIQUEIDENTIFIER
)
RETURNS BIGINT
AS
BEGIN
RETURN (ABS(CHECKSUM(@UNIQID)) % @MAX)+1
END
GO
If you run this script, you always get a result between 1 and 4:
SELECT dbo.Func_Rand(4, NEWID())
But this script sometimes returns no result. Why?
SELECT 'aa'
WHERE dbo.Func_Rand(4, NEWID()) IN ( 1, 2, 3, 4 )
View 2 Replies
Oct 7, 2015
I have heard that a high number of VLFs isn't good: it can hurt performance and delay recovery time, so I wanted to test that.
I created 2 DBs with 100MB datafile and 50MB logfile.
TestDB log file had 100MB autogrowth
TestDB2 log file had 1% growth.
I inserted 1048576 records, took the backup
Ran DBCC loginfo and
TestDB had 40 VLFs and
TestDB2 had 165 VLFs
But when I restored both DBs, this is what I got.
TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, which has 40 VLFs, better than TestDB2, which has 165 VLFs?
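For reference, a minimal sketch of the kind of setup described above (file sizes and growth settings taken from the post, database and file names assumed):

-- TestDB: log autogrowth of 100MB (few, large VLFs)
CREATE DATABASE TestDB
ON PRIMARY (NAME = TestDB_data, FILENAME = 'C:\Data\TestDB.mdf',  SIZE = 100MB)
LOG ON     (NAME = TestDB_log,  FILENAME = 'C:\Data\TestDB.ldf',  SIZE = 50MB, FILEGROWTH = 100MB);

-- TestDB2: log autogrowth of 1% (many small growth increments, hence more VLFs)
CREATE DATABASE TestDB2
ON PRIMARY (NAME = TestDB2_data, FILENAME = 'C:\Data\TestDB2.mdf', SIZE = 100MB)
LOG ON     (NAME = TestDB2_log,  FILENAME = 'C:\Data\TestDB2.ldf', SIZE = 50MB, FILEGROWTH = 1%);

-- Count VLFs in the current database: one row is returned per VLF
DBCC LOGINFO;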
View 6 Replies
Nov 25, 2013
create table #temp_Alpha_num (
[uniquenum] [int] Not NULL,
[Somenum] [int] Not NULL
)
Insert into #temp_Alpha_num(uniquenum,Somenum)
values
(1,121)
[Code] ....
For the Somenum column I need to append letters to make the values unique, for example:
121a, 121b, 121c ... 121z, 121aa, 121ab, 101a and so on...
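A sketch of one way to do this for up to 26 duplicates per value, using ROW_NUMBER to pick a letter per duplicate (handling 'aa', 'ab', ... would need an extra step such as a tally table or recursion):

SELECT uniquenum,
       Somenum,
       CAST(Somenum AS VARCHAR(12))
         + CHAR(96 + ROW_NUMBER() OVER (PARTITION BY Somenum
                                        ORDER BY uniquenum)) AS Somenum_alpha
FROM   #temp_Alpha_num;
-- CHAR(97) = 'a', so the 1st occurrence of 121 becomes 121a, the 2nd 121b, and so on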
View 4 Replies
Apr 7, 2015
I have this query
SELECT top 100 Ltrim([text]),objectid,total_rows,total_logical_reads , execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
where last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative since the plan was first cached; I only want the count for a single day.
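One common workaround, since the DMV only exposes cumulative counters (and loses them if a plan is evicted), is to snapshot the stats into a table on a schedule and diff consecutive snapshots; a rough sketch with an assumed snapshot table:

-- Hypothetical snapshot table: (capture_date date, sql_handle varbinary(64), execution_count bigint)
INSERT INTO dbo.QueryStatsSnapshot (capture_date, sql_handle, execution_count)
SELECT CAST(SYSDATETIME() AS date), qs.sql_handle, qs.execution_count
FROM   sys.dm_exec_query_stats AS qs;

-- Executions on 2015-04-07 = that day's snapshot minus the previous day's snapshot
SELECT today.sql_handle,
       today.execution_count - ISNULL(yesterday.execution_count, 0) AS executions_that_day
FROM   dbo.QueryStatsSnapshot AS today
LEFT JOIN dbo.QueryStatsSnapshot AS yesterday
       ON yesterday.sql_handle   = today.sql_handle
      AND yesterday.capture_date = '2015-04-06'
WHERE  today.capture_date = '2015-04-07';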
View 9 Replies
Jul 6, 2015
I need to group the records into 'n' batches. That can be done with NTILE, but I want to keep identical values together in a single group.
Say for example, following is the list of records I have in my table which I want to group into 5 batches
A123
A124
A124
A123
A127
After Ntile I will get the below,
The desired output is like NTILE, but all rows with the same ID should end up in a single batch.
Even if n = 5, at most 3 batches are possible here, since there are only 3 distinct IDs.
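One sketch of this: apply NTILE to the distinct IDs so every occurrence of an ID lands in the same batch. The table and column names (dbo.Ids, id) are assumptions.

-- NTILE over the *distinct* ids keeps duplicates together;
-- with 3 distinct ids and n = 5, only 3 batches are actually produced
DECLARE @n INT = 5;

SELECT t.id,
       x.batch
FROM   dbo.Ids AS t
JOIN  (SELECT id, NTILE(@n) OVER (ORDER BY id) AS batch
       FROM  (SELECT DISTINCT id FROM dbo.Ids) AS d) AS x
      ON x.id = t.id
ORDER BY x.batch, t.id;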
View 2 Replies
Jul 6, 2015
I have a CTE query against a table with 32K rows that runs fine on 2008 R2. I am running it on 2014 Standard Edition against the same data and it runs very slowly. Looking at the execution plan, I think I see what's contributing to the slowness.
Note that the "actual number of rows" on one operator is some 351M. How is this possible?
the query:
declare @amts table (claim int,allowed decimal(12,2),copay decimal(12,2),deductible decimal(12,2),coins decimal(12,2));
;with unpaid (claimID) as (select claimID from claim where amt+copay + disct+mm + ded=0)
insert @amts
select lineID, sum(rc), sum(copay), sum(deduct),
case when sum(mm)>0 and (sum(mm)<sum(mmamt)) then sum(mm) else 0 end
from claimln
where status is null
and lineID not in (select claimID from unpaid)
group by lineID
it's like there's some massively recursive process going on?
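One diagnostic sometimes tried in this situation (a sketch under the assumption that the column names above are complete, not a definitive fix for the 2014 plan): materialise the CTE into a temp table so the 'unpaid' set is evaluated once, and swap NOT IN for NOT EXISTS.

-- Materialise the CTE once instead of letting the optimiser re-evaluate it
SELECT claimID
INTO   #unpaid
FROM   claim
WHERE  amt + copay + disct + mm + ded = 0;

INSERT @amts
SELECT c.lineID, SUM(c.rc), SUM(c.copay), SUM(c.deduct),
       CASE WHEN SUM(c.mm) > 0 AND SUM(c.mm) < SUM(c.mmamt) THEN SUM(c.mm) ELSE 0 END
FROM   claimln AS c
WHERE  c.status IS NULL
  AND  NOT EXISTS (SELECT 1 FROM #unpaid AS u WHERE u.claimID = c.lineID)
GROUP BY c.lineID;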
View 5 Replies
Jul 15, 2015
I have four columns in my table; the first one is the identity column.
col1 col2 col3 col4
1 12 1 This is Test1
2 12 2 This is Test1
3 12 3 This is Test3
4 12 4 This is Test4
5 12 5 @@@@@
Whenever I see the @@@@@ marker in col4, I need to restart col3 from 1 again, so it will look like this:
col1 Col2 col3 col4
1 12 1 This is Test1
2 12 2 This is Test1
3 12 3 This is Test3
4 12 4 This is Test4
5 12 5 @@@@@
6 12 1 This is another test1
7 12 2 This is another Test2
Is it possible to do that?
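A sketch of one way to do this at query time (assuming the table is called dbo.Tests with the columns shown above): build a group number that increases after each '@@@@@' marker, then ROW_NUMBER within each group.

-- grp counts how many '@@@@@' markers appear strictly before the current row,
-- so the marker row itself stays in the old group and the next row starts a new one
SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY grp ORDER BY col1) AS col3,
       col4
FROM  (SELECT col1, col2, col4,
              SUM(CASE WHEN col4 = '@@@@@' THEN 1 ELSE 0 END)
                  OVER (ORDER BY col1 ROWS UNBOUNDED PRECEDING)
              - CASE WHEN col4 = '@@@@@' THEN 1 ELSE 0 END AS grp
       FROM dbo.Tests) AS t
ORDER BY col1;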
View 8 Replies
Apr 24, 2014
I need to write SQL to find the number of hours between a begin time and an end time. The fields are varchar. There are several date functions in SQL, but I can't figure out how to get the hours between two times that are not stored in a date/time format.
BEGIN_TIME END_TIME
0900----- 1500
1000----- 1700
1000----- 1230
0930----- 1030
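A sketch, assuming the values are always 4-digit 'hhmm' strings in a table named dbo.Shifts (both assumptions): insert a colon so the strings cast to time, then use DATEDIFF.

SELECT BEGIN_TIME,
       END_TIME,
       DATEDIFF(MINUTE,
                CAST(STUFF(BEGIN_TIME, 3, 0, ':') AS time),   -- '0900' -> '09:00'
                CAST(STUFF(END_TIME,   3, 0, ':') AS time)) / 60.0 AS hours_between
FROM   dbo.Shifts;
-- e.g. 0900 to 1500 -> 6.0 hours, 1000 to 1230 -> 2.5 hours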
View 3 Replies
Sep 15, 2006
Is there a way to get the number of occurrences of a word within a specific row?
I want to query a dataset and have a column act as the number of occurrences of a word within that row. I also need to use Full-Text searching... Is this possible?
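Full-text indexing won't return per-row hit counts directly, as far as I know, but a plain string trick gives an occurrence count; a sketch with assumed table/column names (dbo.Documents, Body) and an assumed search word:

DECLARE @word NVARCHAR(50) = N'example';   -- word to count (placeholder)

SELECT DocumentID,
       (LEN(Body) - LEN(REPLACE(Body, @word, ''))) / LEN(@word) AS occurrences
FROM   dbo.Documents
WHERE  CONTAINS(Body, @word);   -- optional: restrict to rows the full-text index says match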
View 6 Replies
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to discuss with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. It also lets me compare several SQL setups (recovery model, max memory configuration, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script ensures the number of bytes inserted into the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since about 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so dramatically during run 1.
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is full. Actually, you can see that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally, I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries (a sketch of such a query follows at the end of this post)
So I wonder why SQL Server starts to execute so many reads in test 1.
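Purely as an illustration of the kind of DMV snapshot mentioned above (the counter names are a subset, and the snapshot table is an assumption):

-- Capture a few buffer-related counters every N loops
INSERT INTO dbo.CounterSnapshot (capture_time, counter_name, cntr_value)
SELECT SYSDATETIME(), counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Buffer Manager%'
  AND  counter_name IN ('Buffer cache hit ratio',
                        'Page life expectancy',
                        'Free list stalls/sec',
                        'Lazy writes/sec');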
View 4 Replies
Jul 16, 2014
I have duplicate records in a table. I need to count the duplicate records based on account number and store the count in a variable; then I need to check whether the count > 0 in a stored procedure. I have used the query below, but it is not working.
SELECT @_Stat_Count = count(*), L1.AcctNo, L1.ReceivedFileID
FROM Legacy L1, Legacy L2, ReceivedFiles
WHERE L1.ReceivedFileID = ReceivedFiles.ReceivedFileID
  AND L1.AcctNo = L2.AcctNo
GROUP BY L1.AcctNo, L1.ReceivedFileID
HAVING Count(*) > 0
IF (@_Stat_Count >0)
BEGIN
SELECT @Status = status_cd from status-table where status_id = 10
END
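For comparison, a variable assignment can't be mixed with returning columns in the same SELECT, so one hedged sketch of the counting step (keeping the post's table names, and using HAVING COUNT(*) > 1 so only true duplicates are counted) would be:

DECLARE @_Stat_Count INT;
DECLARE @Status VARCHAR(20);   -- type assumed, not shown in the post

-- Count how many account numbers appear more than once
SELECT @_Stat_Count = COUNT(*)
FROM  (SELECT L1.AcctNo
       FROM   Legacy AS L1
       JOIN   ReceivedFiles AS RF
         ON   RF.ReceivedFileID = L1.ReceivedFileID
       GROUP BY L1.AcctNo
       HAVING COUNT(*) > 1) AS dups;

IF (@_Stat_Count > 0)
BEGIN
    -- [status-table] and status_id = 10 come from the original post
    SELECT @Status = status_cd FROM [status-table] WHERE status_id = 10;
END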
View 9 Replies
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) - involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
View 9 Replies
Jul 16, 2015
I would like to know how to split the phone number into two columns based on the '-' symbol, dynamically (the position of the '-' varies).
for example
Phone Number
123-12323
1234-1222
so output should be
code number
123 12323
1234 1222
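A sketch using CHARINDEX to find the '-' wherever it falls (the table and column names dbo.PhoneNumbers / PhoneNumber are assumptions):

SELECT PhoneNumber,
       LEFT(PhoneNumber, CHARINDEX('-', PhoneNumber) - 1)                        AS code,
       SUBSTRING(PhoneNumber, CHARINDEX('-', PhoneNumber) + 1, LEN(PhoneNumber)) AS number
FROM   dbo.PhoneNumbers
WHERE  CHARINDEX('-', PhoneNumber) > 0;   -- guard against values with no '-'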
View 1 Replies
Sep 9, 2015
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first, sounded like a good idea but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds of thousands of users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
View 3 Replies
Oct 27, 2015
I have a 2-node cluster with 4 cores per node, running 3 instances of SQL 2008 R2 Enterprise with 60 databases in total, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. My understanding is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads. What needs to be checked to assess the feasibility of going ahead with an asynchronous mirroring setup like this?
View 0 Replies
Nov 6, 2015
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
View 0 Replies
Jul 10, 2013
What I want to do is count how many times the [Omnipay_Account] column is populated with a number, per ParentID.
In the [Omnipay_Account] column you can have either a NULL or a 15-digit number.
The ideal result layout would be three columns, with every row grouped by the ParentID number:
ParentId Omnipay MSIP
My query is
SELECT
[ParentID]
,[Omnipay_Account]
FROM [FDMS].[dbo].[Dim_Outlet]
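A sketch of the grouped count (the MSIP column isn't defined in the post, so it is counted here simply as the rows where Omnipay_Account is NULL, which is an assumption):

SELECT ParentID,
       COUNT(Omnipay_Account)                                   AS Omnipay,  -- counts non-NULL values only
       SUM(CASE WHEN Omnipay_Account IS NULL THEN 1 ELSE 0 END) AS MSIP      -- assumed meaning
FROM   [FDMS].[dbo].[Dim_Outlet]
GROUP BY ParentID;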
View 3 Replies
Sep 14, 2007
I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.
DELETE FROM tblLog
I just get a message about the server timing out, what can I do?
Thanks
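Two common approaches, sketched below for reference: TRUNCATE if every row really can go (minimally logged, but it needs ALTER permission and the table cannot be referenced by foreign keys), or deleting in batches so the log and locks stay manageable.

-- Option 1: remove everything (fast, minimally logged)
TRUNCATE TABLE tblLog;

-- Option 2: delete in batches to avoid one huge, long-running transaction
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM tblLog;
    IF @@ROWCOUNT = 0 BREAK;
END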
View 5 Replies
Apr 3, 2015
Basically the question is, which number should I pick?
View 4 Replies
May 3, 2015
ID A B C AVG
------------------------
1 08 09 10 -
------------------------
2 10 25 26 -
------------------------
3 09 15 16 -
------------------------
I want to calculate the average of the largest two numbers from columns A, B and C for each identity, and store that average in the AVG column.
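One sketch: unpivot the three columns with CROSS APPLY (VALUES ...), keep the top two per row, and average them (the table name dbo.Scores is assumed). An UPDATE ... CROSS APPLY with SET AVG = x.avg_top2 would persist the value.

SELECT s.ID, s.A, s.B, s.C,
       x.avg_top2
FROM   dbo.Scores AS s
CROSS APPLY (SELECT AVG(1.0 * v.val) AS avg_top2
             FROM  (SELECT TOP (2) val
                    FROM   (VALUES (s.A), (s.B), (s.C)) AS t(val)
                    ORDER BY val DESC) AS v) AS x;
-- e.g. for 08, 09, 10 the two largest are 10 and 9, so the average is 9.5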
View 9 Replies
Aug 25, 2015
There are 3 columns in the result set: part num, Qty and MO num. Each MO num has part numbers, so the same part number can appear in several MOs. Each part num has a qty, so if I group by part num I get the total Qty.
1. There are duplicates of part.num and I want to remove the duplicates and add the quantities of those duplicates into one single quantity. For example, if xxxx is a part num with xxxx=1, xxxx=3, xxxx=5, I want xxxx=9; I want to sum those. 2. Each MO has a user; I want to join the user to the MO num in MO.
Heres the code,
SELECT part.num,
       (woitem.qtytarget / wo.qtytarget) AS woitemqty,
       (SELECT LIST(wo.num, ',')
        FROM   wo
        INNER JOIN moitem ON wo.moitemid = moitem.id
        WHERE  moitem.moid = mo.id) AS wonums
FROM   mo
INNER JOIN moitem ON mo.id = moitem.moid
LEFT JOIN wo ON moitem.id = wo.moitemid
LEFT JOIN woitem ON wo.id = woitem.woid AND woitem.typeid = 10
LEFT JOIN (SELECT SUM(woitem.qtytarget) AS labor,
                  woitem.woid,
                  uom.code AS uom
           FROM   woitem
           JOIN   part ON woitem.partid = part.id AND part.typeid = 21
           JOIN   uom  ON woitem.uomid = uom.id
           GROUP BY 2, 3) AS labor ON wo.id = labor.woid
LEFT JOIN part ON woitem.partid = part.id
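As a rough sketch of the summing part only (question 1), grouping by part number collapses the duplicates; the table and column names are taken from the query above and the result-set description, so treat this as illustrative rather than exact:

-- One row per part number, with the duplicate quantities added together
SELECT part.num                             AS part_num,
       SUM(woitem.qtytarget / wo.qtytarget) AS total_qty
FROM   woitem
JOIN   wo   ON wo.id   = woitem.woid
JOIN   part ON part.id = woitem.partid
GROUP BY part.num;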
View 1 Replies
Jul 28, 2015
I need some logic for calculating the count of changed values.
E.g. :--
ID VAL
1 10
1 20
1 20
1 20
1 10
1 10
1 10
1 20
1 10
In the table above, the value changes from 10 to 20 and so on; the VAL column value changes 4 times, so I need a result like:
id count
1 4
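On SQL Server 2012+, LAG makes this straightforward, provided there is a column that defines the row order (the data as posted has none, so an assumed SeqNo column and table name dbo.Vals are used here):

-- Count the rows whose VAL differs from the previous row's VAL (per ID)
SELECT ID,
       SUM(CASE WHEN VAL <> prev_val THEN 1 ELSE 0 END) AS [count]
FROM  (SELECT ID, VAL,
              LAG(VAL) OVER (PARTITION BY ID ORDER BY SeqNo) AS prev_val
       FROM   dbo.Vals) AS t
GROUP BY ID;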
View 7 Replies
Jun 22, 2015
What is the best way to write a function for summing an arbitrary number of time values (a table-valued parameter?). I've read it's necessary to convert to seconds, sum, then convert back, but I'm wondering if there's an alternative.
Here's the example I want to sum:
00:02:01:30
00:01:28:10
00:01:01:50
00:06:50:30
00:00:01:50
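Converting to a single unit and back is indeed the usual approach; a sketch that assumes the four parts are hours:minutes:seconds:hundredths (the post doesn't say, so the format is an assumption) and that the values arrive in a table-valued parameter:

-- Assumed format: hh:mm:ss:ff  (ff = hundredths of a second)
CREATE TYPE dbo.TimeList AS TABLE (val CHAR(11) NOT NULL);
GO
CREATE FUNCTION dbo.SumTimes (@times dbo.TimeList READONLY)
RETURNS VARCHAR(20)
AS
BEGIN
    DECLARE @hundredths BIGINT;

    -- Convert every value to hundredths of a second, then sum
    SELECT @hundredths = SUM(
             CAST(SUBSTRING(val, 1, 2)  AS BIGINT) * 360000   -- hours
           + CAST(SUBSTRING(val, 4, 2)  AS BIGINT) * 6000     -- minutes
           + CAST(SUBSTRING(val, 7, 2)  AS BIGINT) * 100      -- seconds
           + CAST(SUBSTRING(val, 10, 2) AS BIGINT))           -- hundredths
    FROM @times;

    -- Convert back to h:mm:ss:ff (hours may exceed 24)
    RETURN CAST(@hundredths / 360000 AS VARCHAR(10)) + ':'
         + RIGHT('0' + CAST((@hundredths / 6000) % 60 AS VARCHAR(2)), 2) + ':'
         + RIGHT('0' + CAST((@hundredths / 100)  % 60 AS VARCHAR(2)), 2) + ':'
         + RIGHT('0' + CAST(@hundredths % 100 AS VARCHAR(2)), 2);
END
GO
-- Usage:
DECLARE @t dbo.TimeList;
INSERT @t VALUES ('00:02:01:30'), ('00:01:28:10'), ('00:01:01:50'), ('00:06:50:30'), ('00:00:01:50');
SELECT dbo.SumTimes(@t) AS total;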
View 8 Replies