SQL Server Admin 2014 :: Get Average Of Two Largest Numbers Among Three Columns For Particular Identity
May 3, 2015
ID   A    B    C    AVG
------------------------
1    08   09   10   -
2    10   25   26   -
3    09   15   16   -
I want to calculate the average of the two largest numbers from columns A, B and C for each particular identity and store that average in the AVG column.
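One way to do this (a minimal sketch, assuming the table is called dbo.MyTable and A, B and C are numeric) relies on the fact that the average of the two largest of three values is the total minus the smallest, divided by two:

-- Sketch: dbo.MyTable is an assumed name; adjust the division if A, B, C are not integers.
UPDATE t
SET [AVG] = (t.A + t.B + t.C - m.MinVal) / 2.0
FROM dbo.MyTable AS t
CROSS APPLY (SELECT MIN(v) FROM (VALUES (t.A), (t.B), (t.C)) AS x(v)) AS m(MinVal);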
View 9 Replies
Aug 4, 2015
Background information: SQL 2014 Ent., highly volatile OLTP environment. We generate 10 - 12 GB compressed transaction log backup files every 15 minutes.
Currently we have a two-node A/P cluster residing on a flash array and need to leverage AlwaysOn to offload processing. The replica server will have flash storage and the same CPU and memory footprint, with a 10Gb connection between nodes. Is anyone else generating transaction logs this large per 15/30 minute period?
View 0 Replies
View Related
Apr 27, 2015
How would you calculate the average read/write latency experienced by a SQL Server instance during a specific time window, in order to monitor this for multiple instances? From this MSDN blog, I know that you have to take multiple samples and do some calculations to get the correct latency.
[URL] ...
However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool and we want to get one number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem.
object_name                    counter_name                 instance_name  cntr_value  cntr_type
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        default        307318919   1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   default        25546724    1073939712
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        internal       2045730     1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   internal       208270      1073939712
I'm thinking I would need to do some sort of weighted average, but I'm not sure if that will result in the correct value. Here's the formula I am currently thinking of using, before doing the calculation over time:
((default * default[base]) + (internal * internal[base]))/(default[base] + internal[base])
Then to do the calculation over time, I'd use the changes in the calculated numerator and denominator to get the average.
Does this sound like the correct way to get this value? Is there a good way to verify it?
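That weighted-average idea looks consistent with how the fraction/base counter pairs work: the raw cntr_value of each "Avg Disk Read IO (ms)" row is already the cumulative numerator and its "Base" row is the denominator, so summing the numerators and dividing by the summed bases gives one instance-wide value. A hedged sketch of that per-sample calculation (you would still difference two samples taken at the start and end of the window):

-- Sketch: one instance-wide read latency value from the per-pool counters.
SELECT
    SUM(CASE WHEN counter_name = 'Avg Disk Read IO (ms)' THEN cntr_value END) * 1.0
    / NULLIF(SUM(CASE WHEN counter_name = 'Avg Disk Read IO (ms) Base' THEN cntr_value END), 0)
        AS avg_read_latency_ms
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%'
  AND counter_name IN ('Avg Disk Read IO (ms)', 'Avg Disk Read IO (ms) Base');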
View 2 Replies
View Related
Dec 19, 2006
Example data
CA1000
CA10001
CA10002
CA10003
CA11597
CA11603
CA1001
CA998
CA999
As you can see, CA11603 is the largest number in this list.
When I try the following SQL code,
SELECT
MAX([MyCode])
FROM
[MyTable]
WHERE (SUBSTRING([MyCode], 1, 2) = 'CA')
The largest number comes back as CA997
When I try
SELECT
MAX([MyCode])
FROM
[MyTable]
WHERE [MyCode] LIKE 'CA%'
The largest number comes back as CA997
SELECT
TOP 1 (SchoolMasterCode)
FROM
SchoolMaster
WHERE (SUBSTRING(SchoolMasterCode, 1, 2) = 'CA') ORDER BY Schoolmastercode
The largest comes back as CA10001
When I try....
SELECT
TOP 1 (SchoolMasterCode)
FROM
SchoolMaster
WHERE (SUBSTRING(SchoolMasterCode, 1, 2) = 'CA')
The largest comes back as CA1278
What am I doing wrong?
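A likely cause is that MAX and ORDER BY compare these codes as strings, so 'CA998' and 'CA999' sort above 'CA11603'. A hedged sketch (assuming every code is 'CA' followed only by digits) orders by the numeric part instead:

-- Sketch: compare the part after 'CA' as a number so CA11603 beats CA999.
SELECT TOP (1) SchoolMasterCode
FROM SchoolMaster
WHERE SchoolMasterCode LIKE 'CA%'
ORDER BY CAST(SUBSTRING(SchoolMasterCode, 3, 18) AS INT) DESC;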
View 3 Replies
View Related
Oct 7, 2015
I have heard that a high number of VLFs isn't good: it can impact performance and delay recovery time, so I wanted to test that.
I created 2 DBs with 100MB datafile and 50MB logfile.
TestDB log file had 100MB autogrowth
TestDB2 log file had 1% growth.
I inserted 1,048,576 records and took a backup.
I ran DBCC LOGINFO:
TestDB had 40 VLFs and
TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, with its 40 VLFs, any better than TestDB2 with 165?
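For reference, a minimal sketch of the two setups being compared (sizes as described above; file paths are placeholders), plus the VLF check:

-- Sketch: identical data/log sizes, differing only in log autogrowth.
CREATE DATABASE TestDB
ON PRIMARY (NAME = TestDB_data, FILENAME = 'D:\Data\TestDB.mdf', SIZE = 100MB)
LOG ON (NAME = TestDB_log, FILENAME = 'D:\Log\TestDB.ldf', SIZE = 50MB, FILEGROWTH = 100MB);

CREATE DATABASE TestDB2
ON PRIMARY (NAME = TestDB2_data, FILENAME = 'D:\Data\TestDB2.mdf', SIZE = 100MB)
LOG ON (NAME = TestDB2_log, FILENAME = 'D:\Log\TestDB2.ldf', SIZE = 50MB, FILEGROWTH = 1%);

-- One row per VLF; run in each database.
DBCC LOGINFO;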
View 6 Replies
View Related
Apr 7, 2015
I have this query
SELECT top 100 Ltrim([text]),objectid,total_rows,total_logical_reads , execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
where last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative from when the plan was first cached. I want to see the count for one day only.
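Since sys.dm_exec_query_stats only keeps cumulative totals (and only while the plan stays in cache), one common approach is to snapshot the counters on a schedule and diff consecutive snapshots. A sketch, with dbo.QueryStatsSnapshot as an assumed table name:

-- Snapshot table (assumed name).
CREATE TABLE dbo.QueryStatsSnapshot
(
    capture_time DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    sql_handle VARBINARY(64) NOT NULL,
    execution_count BIGINT NOT NULL,
    total_logical_reads BIGINT NOT NULL
);

-- Run on a schedule, e.g. once a day via an Agent job.
INSERT dbo.QueryStatsSnapshot (sql_handle, execution_count, total_logical_reads)
SELECT sql_handle, SUM(execution_count), SUM(total_logical_reads)
FROM sys.dm_exec_query_stats
GROUP BY sql_handle;

-- Executions between the two most recent snapshots.
;WITH ranked AS
(
    SELECT *, ROW_NUMBER() OVER (PARTITION BY sql_handle ORDER BY capture_time DESC) AS rn
    FROM dbo.QueryStatsSnapshot
)
SELECT cur.sql_handle,
       cur.execution_count - ISNULL(prev.execution_count, 0) AS executions_in_period
FROM ranked AS cur
LEFT JOIN ranked AS prev ON prev.sql_handle = cur.sql_handle AND prev.rn = 2
WHERE cur.rn = 1;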
View 9 Replies
View Related
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1Gb vs 10Gb network, AMD vs Intel, etc. I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)
Since roughly 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), the number of bytes read and the number of reads (database level) increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually, you can see the number of writes (not the number of bytes written) start to increase faster in test 1 after 4,000,000 rows, but there is no real impact on write latency.
Finally, I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loop iterations (a sketch of this sampling follows below)
So I wonder why SQL Server starts to execute so many reads in test 1.
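As an aside, a minimal sketch of the DMV sampling mentioned above (the logging table name is an assumption; the counter names are the standard Buffer Manager ones):

-- Assumed logging table.
CREATE TABLE dbo.CounterLog
(
    capture_time DATETIME2 NOT NULL,
    counter_name NVARCHAR(128) NOT NULL,
    cntr_value BIGINT NOT NULL
);

-- Executed inside the insert loop, every 10,000 iterations.
INSERT INTO dbo.CounterLog (capture_time, counter_name, cntr_value)
SELECT SYSDATETIME(), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec');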
View 4 Replies
View Related
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantages to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states “Use a single log file in most situations as log files are written sequentially.”
View 9 Replies
View Related
Sep 9, 2015
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first, sounded like a good idea but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
View 3 Replies
View Related
Oct 27, 2015
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads. What all needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
View 0 Replies
View Related
Nov 6, 2015
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
View 0 Replies
View Related
Apr 3, 2015
Basically the question is, which number should I pick?
View 4 Replies
View Related
Aug 24, 2015
I am new to SQL Server. What do I need to check, and what actions do I need to take, when adding a new column to a table?
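For reference, the statement itself is simple (a sketch with placeholder table and column names); the main things to check are whether the column should allow NULLs or needs a default, whether any existing code does SELECT * or INSERT without a column list against the table, and how large the table is, since adding a NOT NULL column with a default can be a size-of-data operation on some versions/editions:

-- Sketch: add a nullable column; existing rows get NULL.
ALTER TABLE dbo.MyTable
    ADD NewColumn INT NULL;

-- Or with a default for existing and future rows.
ALTER TABLE dbo.MyTable
    ADD NewColumn2 INT NOT NULL
        CONSTRAINT DF_MyTable_NewColumn2 DEFAULT (0);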
View 5 Replies
View Related
Jun 25, 2014
I have a 50 GB database with hundreds of columns. I would like to choose a certain column and export the data in it to a .csv or Excel file. How can I do that? I am very new to MSSQL...
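One lightweight option is the bcp command-line utility, which can write a query result straight to a text file that Excel can open. A sketch, run from a command prompt (server, database, table and column names are placeholders):

bcp "SELECT SomeColumn FROM MyDatabase.dbo.MyTable" queryout "C:\export\column.csv" -c -t, -S MyServerName -T

The -c switch writes character data, -t, sets a comma field terminator, and -T uses Windows authentication; the SSMS Import/Export wizard is another option if a GUI is preferred.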
View 1 Replies
View Related
May 18, 2015
I would like to put a clustered index on a date column in a current heap, but I have one question/concern. Every month this heap has thousands of rows deleted and even more added later. How much of an issue will this cause the clustered index as far as page splits? I was thinking of a fill factor of 70%. I would normally just test (and still will on a dev box), but my dev box is much smaller than production as far as power.
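For reference, a minimal sketch of the statement in question (table, column and index names are placeholders):

-- Sketch: clustered index on the date column with a 70% fill factor.
CREATE CLUSTERED INDEX CIX_MyHeap_DateColumn
    ON dbo.MyHeap (DateColumn)
    WITH (FILLFACTOR = 70, SORT_IN_TEMPDB = ON, ONLINE = ON);  -- ONLINE requires Enterprise Edition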
View 6 Replies
View Related
Jun 17, 2015
I need to encrypt some column-level data in multiple tables in SQL Server 2014. I've never tried encryption in SQL Server 2014. How can I achieve it?
View 4 Replies
View Related
Jun 25, 2015
I am trying to implement column encryption on one of the tables. I have used the link below as the reference and got stuck at the last step.
[URL] ....
I have completed the following steps so far.
- CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'myStrongPassword'
- CREATE CERTIFICATE MyCertificateName
WITH SUBJECT = 'A label for this certificate'
- CREATE SYMMETRIC KEY MySymmetricKeyName WITH
IDENTITY_VALUE = 'a fairly secure name',
ALGORITHM = AES_256,
[Code] .....
Example using the functions:
EXEC OpenKeys
-- Encrypting
SELECT Encrypt(myColumn) FROM myTable
-- Decrypting
SELECT Decrypt(myColumn) FROM myTable
When I ran the last command :
-- Decrypting
SELECT Decrypt(myColumn) FROM myTable
I get the following error :
Msg 257, Level 16, State 3, Line 2
Implicit conversion from data type nvarchar to varbinary is not allowed. Use the CONVERT function to run this query.
Where should I use the CONVERT function: in the decrypt function or in the SELECT statement?
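The error suggests the decrypt call is being handed an nvarchar value where DecryptByKey expects varbinary ciphertext. Assuming the encrypted values are stored in a varbinary column (here called myColumnEncrypted, a placeholder) and that the article's wrapper ultimately calls DecryptByKey, the CONVERT belongs in the SELECT, around the decrypted output, to turn the varbinary result back into the original string type:

-- Sketch: decrypt, then convert the varbinary output back to nvarchar.
OPEN SYMMETRIC KEY MySymmetricKeyName
    DECRYPTION BY CERTIFICATE MyCertificateName;

SELECT CONVERT(NVARCHAR(200), DecryptByKey(myColumnEncrypted)) AS myColumnPlain
FROM myTable;

CLOSE SYMMETRIC KEY MySymmetricKeyName;

If the ciphertext was instead written into an nvarchar column, the cleaner fix is to change that column to varbinary rather than trying to CONVERT the input.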
View 9 Replies
View Related
Jun 10, 2014
I have created a stored procedure for retrieving column names, as shown below:
CM_id, CM_Name, [Transaction_Month], [Transaction_Year], [Invoice raised date],[Payment Received date],[Payout date],[Payroll lock date]
Now I am trying to create a temporary table, with data types, using the columns generated above by the stored procedure.
View 3 Replies
View Related
Jun 3, 2015
In SQL Server, I updated values of a column by mistake in a database hosted online. Is there any way to undo the transaction? I didn't create any backup of the database. I read that the data can still be recovered through the .ldf (log file), but I am unable to access it. Is there any way to get access to the log file, or any other way to recover the data?
View 1 Replies
View Related
Aug 25, 2015
I had an existing table with lots of indexes.
As a test (for speed) I added a nonclustered columnstore index.
When I run test queries, the optimizer always ignores my new columnstore index. Why?
Should I remove the old indexes, leaving just the column store?
View 2 Replies
View Related
Feb 24, 2015
I have the following 2 queries: one for the case when the table has no identity column, and another for when it does. I am planning to combine them into a single query.
Query 1:
SELECT @ColumnNamesWhenNoIdentity = COALESCE(@ColumnNamesWhenNoIdentity + ',', '') + Name +'= SOURCE.'+Name
FROM sys.columns WITH(NOLOCK) WHERE object_id =
(
SELECT sys.objects.object_id
FROM sys.objects WITH(NOLOCK)
INNER JOIN sys.schemas WITH(NOLOCK) ON sys.objects.schema_id = sys.schemas.schema_id
WHERE sys.objects.TYPE = 'U' AND sys.objects.Name = 'Testing1' AND sys.schemas.Name ='dbo'
)
Query2:
SELECT @ColumnNamesWhenNoIdentity = COALESCE(@ColumnNamesWhenNoIdentity + ',', '') + Name +'= SOURCE.'+Name
FROM sys.columns WITH(NOLOCK) WHERE is_identity != 1 AND object_id =
(SELECT sys.objects.object_id FROM sys.objects WITH(NOLOCK)
INNER JOIN sys.schemas WITH(NOLOCK) ON sys.objects.schema_id = sys.schemas.schema_id
WHERE sys.objects.TYPE = 'U' AND sys.objects.Name = 'Testing2' AND sys.schemas.Name ='dbo'
)
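Note that the is_identity <> 1 filter simply has no effect on a table without an identity column, so the second form already covers both cases. A sketch of a single query (with @SchemaName and @TableName as assumed variables; OBJECT_ID shortens the inner lookup):

SELECT @ColumnNamesWhenNoIdentity = COALESCE(@ColumnNamesWhenNoIdentity + ',', '') + Name + '= SOURCE.' + Name
FROM sys.columns
WHERE is_identity <> 1
  AND object_id = OBJECT_ID(QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName));  -- e.g. dbo.Testing1 or dbo.Testing2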
View 8 Replies
View Related
Jul 15, 2015
I have four columns in my table, the first one is the identity column
col1  col2  col3  col4
1     12    1     This is Test1
2     12    2     This is Test1
3     12    3     This is Test3
4     12    4     This is Test4
5     12    5     @@@@@
When I see the @@@@@ sign in col4, I need to restart col3 from 1 again, so it will look like this:
col1  col2  col3  col4
1     12    1     This is Test1
2     12    2     This is Test1
3     12    3     This is Test3
4     12    4     This is Test4
5     12    5     @@@@@
6     12    1     This is another test1
7     12    2     This is another Test2
Is it possible to do that?
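It should be possible with a running count of the @@@@@ marker rows to define groups, then ROW_NUMBER within each group. A sketch (dbo.MyTable and the column names are placeholders; it assumes the marker row closes a group, as in the example above):

;WITH grouped AS
(
    SELECT col1, col2, col4,
           SUM(CASE WHEN col4 = '@@@@@' THEN 1 ELSE 0 END)
               OVER (ORDER BY col1 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS grp
    FROM dbo.MyTable
)
SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY ISNULL(grp, 0) ORDER BY col1) AS col3,
       col4
FROM grouped
ORDER BY col1;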
View 8 Replies
View Related
Sep 19, 2005
Ok, I just need to know how to get the last record inserted, by the highest IDENTITY number, even if the computer was rebooted and it was two weeks ago (this does not have to do with the session). Any help is appreciated. Thanks, Trint
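Two hedged options, assuming a table dbo.MyTable with identity column ID (both names are placeholders):

-- The full row with the highest identity value, regardless of session or uptime.
SELECT TOP (1) *
FROM dbo.MyTable
ORDER BY ID DESC;

-- Or just the last identity value generated for that table by any session.
SELECT IDENT_CURRENT('dbo.MyTable') AS LastIdentityValue;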
View 2 Replies
View Related
Aug 25, 2015
There are 3 columns in the result set: part num, Qty and MO num. Each MO num has part numbers, so the same part number can appear in several MOs. Each part num has a qty, so if I group by part num, I get the Qty.
1. There are duplicates of part.num and I want to collapse the duplicates and add their quantities into one single quantity. For example, if xxxx is a part num with rows xxxx=1, xxxx=3 and xxxx=5, I want xxxx=9; I want to sum those. Another question: each MO has a user, and I want to join the user to the MO num in MO.
Here's the code:
SELECT part.num, (woitem.qtytarget / wo.qtytarget) AS woitemqty,
       (SELECT LIST(wo.num, ',') FROM wo INNER JOIN moitem ON wo.moitemid = moitem.id WHERE moitem.moid = mo.id) AS wonums
FROM mo
INNER JOIN moitem ON mo.id = moitem.moid
LEFT JOIN wo ON moitem.id = wo.moitemid
LEFT JOIN woitem ON wo.id = woitem.woid AND woitem.typeid = 10
LEFT JOIN (SELECT SUM(woitem.qtytarget) AS labor, woitem.woid, uom.code AS uom
           FROM woitem
           JOIN part ON woitem.partid = part.id AND part.typeid = 21
           JOIN uom ON woitem.uomid = uom.id
           GROUP BY 2, 3) AS labor ON wo.id = labor.woid
LEFT JOIN part ON woitem.partid = part.id
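For the summing part, a hedged sketch of the general shape (table and column names follow the query above, but the user join is an assumption - it supposes mo has a userid column pointing at some user table, so adjust to the real schema):

-- Sketch: collapse duplicate part numbers and sum their quantities.
SELECT part.num,
       mo.num AS mo_num,
       su.username AS mo_user,                    -- assumed column/table names
       SUM(woitem.qtytarget / wo.qtytarget) AS total_qty
FROM mo
INNER JOIN moitem ON mo.id = moitem.moid
LEFT JOIN wo ON moitem.id = wo.moitemid
LEFT JOIN woitem ON wo.id = woitem.woid AND woitem.typeid = 10
LEFT JOIN part ON woitem.partid = part.id
LEFT JOIN sysuser AS su ON mo.userid = su.id      -- assumed join
GROUP BY part.num, mo.num, su.username;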
View 1 Replies
View Related
Aug 27, 2015
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny alter any login (best done with a custom server role in 2012+ but otherwise just manually in 2008). I tested this out and it works, I'm not able to alter any logins or increase my own permissions, I also did a check of what's reported from fn_my_permissions(null, null) and it shows minimal permissions like I'd expect.
View 0 Replies
View Related
Oct 15, 2013
I have a table with product_name and introduction_date (when the product was first introduced) as columns. Now I want to calculate an average as follows:
if the item was sold in the previous business year (say 2011-12), then the average should be the average price in the business year before that (2010-11); if it was newly introduced (say in 2013-14), then the average should be for the current year (2013-14).
Note: the business year runs April to March.
View 1 Replies
View Related
Jun 19, 2008
Hi,
I am having a problem bulk-loading a SQL Server table that has an identity column from a DataTable (which has no identity column) using SqlBulkCopy. I have tried several approaches, but it does not show any error, nor is the table getting updated. However, the identity value seems to be increasing every time.
thanks.
varun
View 6 Replies
View Related
Oct 8, 2007
Hi,
I have the following two tables:
Code:
create table RECORD
(
ID int not null,
Issue_descr varchar(256) not null,
Priority varchar(5) not null,
Status varchar(12) not null,
Date_add varchar(10) not null,
Date_due varchar(10) not null,
Date_complete varchar(10),
PName varchar(32),
primary key(ID),
foreign key(PName) references PROJECT(PName)
);
and
Code:
create table STEPS
(
ID int not null,
Num int not null,
Descr varchar(256),
Date_due varchar(10) not null,
Date_complete varchar(10),
Status varchar(12),
primary key(ID, Num),
foreign key(ID) references RECORD(ID)
);
I have set PK "ID" in table RECORD to auto identity(1,1). I have done the same for PK "num" in table STEPS.
However I am seeking this behavior in STEPS:
ID num
-- ----
19 1
19 2
19 3
20 1
20 2
21 1
but what I'm getting is PK num doesn't "reseed" or reset to 1 as "ID" changes. PK num just auto-increments regardless of ID. Is there a workaround?
Thanks.
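A common workaround (a sketch against the STEPS table above, with @ID, @Descr and @Date_due as assumed parameters) is to drop the IDENTITY property from Num and compute the next value per ID at insert time:

-- Sketch: Num becomes a plain int, numbered per parent ID.
INSERT INTO STEPS (ID, Num, Descr, Date_due)
SELECT @ID,
       ISNULL((SELECT MAX(Num) FROM STEPS WHERE ID = @ID), 0) + 1,
       @Descr,
       @Date_due;

Under concurrent inserts for the same ID this needs appropriate locking (for example WITH (UPDLOCK, HOLDLOCK) on the MAX lookup), but it produces the 19/1, 19/2, 20/1 pattern shown above.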
View 2 Replies
View Related
Jul 20, 2005
I am migrating a web application I wrote from ASP to ASP.Net, and from Access to MS SQL Server. In the Access version, I did not use the autonumber for creating invoices and other documents, because I heard somewhere (perhaps incorrectly) that if the db was ever compacted or otherwise changed, it could change the values of the auto-numbers. Not a good thing. So I wrote a routine that, just before creating a new record, would look for the highest value in the table and create the new record with the next number.
So my question is, am I safe in assuming that in MS SQL I can set a starting number for the next, let's say, invoice, and that new numbers will be issued in sequence, and that these numbers will never change? What happens if an invoice is deleted? Is the number gone forever? Just wondering how others deal with these issues... thanks.
Larry
"Forget it, Jake. It's Chinatown."
View 2 Replies
View Related
Jan 24, 2005
Hi,
How do I write the SQL query to return the next generated identity value from the SQL Server database?
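Two hedged options, depending on whether the value is needed after an insert or independently of one (the table name is a placeholder):

-- After an INSERT in the current scope: the identity value just generated.
SELECT SCOPE_IDENTITY() AS NewId;

-- The last identity value generated for a specific table by any session.
SELECT IDENT_CURRENT('dbo.MyTable') AS LastId;

Predicting the next value before the insert (for example IDENT_CURRENT + IDENT_INCR) is unreliable under concurrency, so the usual pattern is to insert first and then read SCOPE_IDENTITY().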
View 1 Replies
View Related
Jun 26, 2006
Hi. I am trying to figure out the code for sorting a manual (non-identity) number column in my table. The purpose is to
show the user's pictures in perfect order (1, 2, 3, 4, 5, 6...).
The gist of my problem: when a user first inserts six pictures, he gets:
|1|
|2|
|3|
|4|
|5|
|6|
All is good. But, say he deletes picture |3|. Now the list order looks like this:
|1|
|2|
<- |3| is removed
|4|
|5|
|6|
And then he inserts two more pictures; now he has this:
|1|
|2|
|4|
|5|
|6|
|7| <- |7| & |8| are added
|8|
What I want to achieve is a "reshuffling" of the number order every time a picture is removed. So, when |3| is removed, |4| becomes |3|, |5| becomes |4|, and so on. There should never be a gap in the order.
I am new to stored procedures, and have been trying to figure this out. Below is my guesswork:
Code:
ALTER PROCEDURE dbo.sp_NewPersonalPic
(
@photo_name VARCHAR(50) = NULL,
@photo_location VARCHAR(100) = NULL,
@photo_size VARCHAR(50) = NULL,
@user_name VARCHAR(50) = NULL,
@photo_caption VARCHAR(150) = NULL,
@photo_default BIT = NULL,
@photo_private BIT = NULL,
@photo_number INTEGER = NULL,
@photo_date DATETIME = NULL
)
AS
BEGIN
SELECT @photo_date = CONVERT(DATETIME,convert(char(26), getdate(), 109))
END
BEGIN
SET @photo_number = 1
SELECT
@photo_number = (
SELECT COUNT(*)
FROM dbo.PersonalPhotos b
WHERE
a.photo_date < b.photo_date
)
FROM
dbo.PersonalPhotos a
ORDER BY
a.photo_date
END
BEGIN
My thinking is that it would be a safe bet to use the "photo_date" column as a litmus for my "photo_number" column (ie, the most recent record inserted by the user will always be at a later date than the previously inserted record). So:
photo_number photo_date
|1| 2006-06-26 21:43:36.653
|2| 2006-06-26 21:43:50.000
|3| 2006-06-26 21:45:25.217
|4| 2006-06-26 21:45:33.763
|5| 2006-06-26 22:39:42.670
|6| 2006-06-26 22:39:49.200
If |3| is removed above, the numbers are reordered based on the time of entry sequence.
Any suggestions on how to achieve this in my stored procedure? Currently I get the correct order, but it goes crazy when I delete and add.
Thanks and sorry for the verbose post.
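One hedged approach is to renumber after every delete using ROW_NUMBER over the insertion time, instead of trying to maintain the number inside the insert procedure. A sketch against the PersonalPhotos table named above (user_name, photo_date and photo_number as in the procedure):

-- Sketch: renumber one user's photos by insertion time; run after a delete.
;WITH ordered AS
(
    SELECT photo_number,
           ROW_NUMBER() OVER (ORDER BY photo_date) AS new_number
    FROM dbo.PersonalPhotos
    WHERE user_name = @user_name
)
UPDATE ordered
SET photo_number = new_number;

Another option is not to store photo_number at all and simply compute ROW_NUMBER() OVER (ORDER BY photo_date) in the SELECT that displays the pictures, so there is never a gap to repair.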
View 5 Replies
View Related
Jun 22, 2015
I have got this matrix and I am trying to calculate an average per working day in each month. At the moment, I have divided the total number of jobs by 21 for every month, which is a hard-coded value. However, I am not sure how to retrieve this value dynamically. Is there any formula that can find out the number of working days?
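If the source is SQL Server, one way to get the weekday count for a month (a sketch; it counts Monday to Friday only and ignores public holidays, which would need a calendar table):

-- Sketch: number of Monday-to-Friday days in the month containing @AnyDateInMonth.
DECLARE @AnyDateInMonth DATE = '2015-06-01';
DECLARE @FirstDay DATE = DATEADD(MONTH, DATEDIFF(MONTH, 0, @AnyDateInMonth), 0);
DECLARE @NextMonth DATE = DATEADD(MONTH, 1, @FirstDay);

SELECT COUNT(*) AS working_days
FROM (SELECT DATEADD(DAY, n.number, @FirstDay) AS d
      FROM master.dbo.spt_values AS n
      WHERE n.type = 'P' AND n.number < DATEDIFF(DAY, @FirstDay, @NextMonth)) AS cal
WHERE DATENAME(WEEKDAY, cal.d) NOT IN ('Saturday', 'Sunday');  -- assumes English weekday names

This value could then feed the matrix expression instead of the hard-coded 21.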
View 7 Replies
View Related
Jul 18, 2012
I am trying to retrieve all rows with the largest value in a particular column. The largest value could return many rows for a particular user. Here is what I have so far.
SELECT DISTINCT
ID, NAME, FOP, ACCT, CTNUM, ENDDATE, DEBIT, CREDIT, TRANSACTION_DATE, EXPORTED, CALENDAR_YEAR, FISCAL_YEAR, PAYROLL_IDENTIFIER,
PAYROLL_NUMBER, [EARN-SEQNO], EVENT_SEQUENCE_NUMBER
FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY ID, ACCT, PAYROLL_NUMBER,EVENT_SEQUENCE_NUMBER
ORDER BY EVENT_SEQUENCE_NUMBER DESC) AS RN
FROM PAYROLLYEAREND ) s
WHERE RN = 1 AND ID = '16443' AND PAYROLL_NUMBER ='7'
In the above example, EVENT_SEQUENCE_NUMBER is populated with values from 0 to 12; this can vary per user and PAYROLL_NUMBER. The query above returns 48 rows. However, all I want are the rows where EVENT_SEQUENCE_NUMBER is equal to the highest, which in the above example is 12; the result would be 29 rows. (The WHERE clause is not part of the overall query; I'm just isolating one ID.)
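The likely issue is that EVENT_SEQUENCE_NUMBER is part of the PARTITION BY, so every sequence number gets its own RN = 1. A hedged sketch that keeps all rows tied for the highest sequence number (assuming the grouping should be ID, ACCT and PAYROLL_NUMBER, i.e. the original PARTITION BY minus the sequence column) uses RANK instead:

-- Sketch: keep every row sharing the highest EVENT_SEQUENCE_NUMBER in its partition.
SELECT *
FROM (SELECT *,
             RANK() OVER (PARTITION BY ID, ACCT, PAYROLL_NUMBER
                          ORDER BY EVENT_SEQUENCE_NUMBER DESC) AS RK
      FROM PAYROLLYEAREND) AS s
WHERE RK = 1
  AND ID = '16443'
  AND PAYROLL_NUMBER = '7';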
View 2 Replies
View Related