SQL Server 2014 :: Duplicates Of Part-number To Be Summed And Add Another Column Called User?
Aug 25, 2015
There are 3 columns in the result set: part num, Qty, and MO num. Each MO num has part numbers, so the same part number can appear in several MOs. Each part num has a qty, so if I group by part num I get the Qty.
1. There are duplicates of part.num, and I want to remove the duplicates and add the quantities of those duplicates into one single quantity. For example, if xxxx is a part num and it appears as xxxx=1, xxxx=3, xxxx=5, I want xxxx=9. I want to sum those. 2. Each MO has a user, and I want to join the user to the MO num.
Here's the code:
SELECT part.num,
       (woitem.qtytarget / wo.qtytarget) AS woitemqty,
       (SELECT LIST(wo.num, ',')
        FROM wo
        INNER JOIN moitem ON wo.moitemid = moitem.id
        WHERE moitem.moid = mo.id) AS wonums
FROM mo
INNER JOIN moitem ON mo.id = moitem.moid
LEFT JOIN wo ON moitem.id = wo.moitemid
LEFT JOIN woitem ON wo.id = woitem.woid AND woitem.typeid = 10
LEFT JOIN (SELECT SUM(woitem.qtytarget) AS labor, woitem.woid, uom.code AS uom
           FROM woitem
           JOIN part ON woitem.partid = part.id AND part.typeid = 21
           JOIN uom ON woitem.uomid = uom.id
           GROUP BY 2, 3) AS labor ON wo.id = labor.woid
LEFT JOIN part ON woitem.partid = part.id
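One way to get the summed quantities (a minimal sketch, not a definitive fix): group on part.num alone and let SUM() collapse the duplicates. The user join is an assumption here, since the schema isn't shown; sysuser and mo.userid below are hypothetical names for the user table and the MO-to-user link.

SELECT part.num,
       SUM(woitem.qtytarget / wo.qtytarget) AS totalqty,
       sysuser.username
FROM mo
INNER JOIN moitem ON mo.id = moitem.moid
LEFT JOIN wo ON moitem.id = wo.moitemid
LEFT JOIN woitem ON wo.id = woitem.woid AND woitem.typeid = 10
LEFT JOIN part ON woitem.partid = part.id
LEFT JOIN sysuser ON mo.userid = sysuser.id   -- hypothetical MO-to-user link
GROUP BY part.num, sysuser.username           -- duplicates of part.num collapse here

If you still need one row per MO, add mo.num to the SELECT and GROUP BY, keeping in mind that a part appearing in several MOs will then stay on several rows.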
I have four columns in my table; the first one is the identity column.

col1  col2  col3  col4
1     12    1     This is Test1
2     12    2     This is Test1
3     12    3     This is Test3
4     12    4     This is Test4
5     12    5     @@@@@

When I see the @@@@@ sign in col4, I need to restart col3 from 1 again, so it will look like this:

col1  col2  col3  col4
1     12    1     This is Test1
2     12    2     This is Test1
3     12    3     This is Test3
4     12    4     This is Test4
5     12    5     @@@@@
6     12    1     This is another test1
7     12    2     This is another Test2
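A minimal sketch of one way to compute the restarting counter, assuming the table is called t (a hypothetical name) and col1 defines the row order: a running count of the '@@@@@' marker rows that appear strictly before each row forms a group key, and ROW_NUMBER() restarts within each group.

;WITH grp AS (
    SELECT col1, col2, col4,
           -- markers seen strictly before this row; each marker closes a group
           COALESCE(SUM(CASE WHEN col4 = '@@@@@' THEN 1 ELSE 0 END)
                        OVER (ORDER BY col1
                              ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS g
    FROM t
)
SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY g ORDER BY col1) AS col3,
       col4
FROM grp;

The marker row itself stays in the group it ends (so row 5 keeps col3 = 5), and the row after it starts again at 1.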
I have a table with a column named measurement decimal(18,1). If the value is 2.0, I want the stored proc to return 2, but if the value is 2.5 I want it to return 2.5. So if the value after the decimal point is 0, I only want the integer portion back. Is there a SQL function I can use to determine the fractional part of the decimal value? In C#, I can use dr["measurement"].ToString().Split(".".ToCharArray())[1] to see what the value after the decimal is.
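A minimal sketch in T-SQL, assuming the table is dbo.Readings (a hypothetical name): comparing the value with FLOOR() of itself detects a zero fraction, and the CASE picks the cast accordingly.

SELECT CASE
           WHEN measurement = FLOOR(measurement)
               THEN CAST(CAST(measurement AS bigint) AS varchar(20))  -- 2.0 -> '2'
           ELSE CAST(measurement AS varchar(20))                      -- 2.5 -> '2.5'
       END AS measurement_display
FROM dbo.Readings;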
Within the LinkingID, there are duplicates in ID1 and ID2, just in opposite columns. I have been trying to figure out a way to remove these set-based. It doesn't matter which duplicate is removed; essentially these are just endpoints and I don't care which side they are on. The solution must recognize the duplicates and not just remove based on every 2nd row.
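A set-based sketch, assuming the table is called Links (a hypothetical name) with columns LinkingID, ID1, ID2: for each mirrored pair, delete only the orientation where ID1 > ID2, so exactly one endpoint row survives regardless of physical row order.

DELETE a
FROM Links AS a
WHERE a.ID1 > a.ID2                    -- delete only one orientation of the pair
  AND EXISTS (SELECT 1
              FROM Links AS b
              WHERE b.LinkingID = a.LinkingID
                AND b.ID1 = a.ID2      -- the mirrored twin exists
                AND b.ID2 = a.ID1);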
I have a large table of customers. I would like to add a column that contains an integer, unique to that customer. The trick is that this file contains many duplicate customers, so I want the duplicates to all share the same number. The numbers don't have to be sequential or anything; I just want duplicate customers to have the same one.
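A minimal sketch using DENSE_RANK(), assuming the table is dbo.Customers and that CustName plus CustAddress identify a duplicate (swap in whatever columns actually define one); every duplicate gets the same rank, and distinct customers get different ranks.

ALTER TABLE dbo.Customers ADD CustGroupID int NULL;
GO

;WITH ranked AS (
    SELECT CustGroupID,
           DENSE_RANK() OVER (ORDER BY CustName, CustAddress) AS rnk
    FROM dbo.Customers
)
UPDATE ranked
SET CustGroupID = rnk;   -- duplicates share a rank, so they share a number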
Sorry to bother you guys with what I thought would be an easy task. I have a table in my database where I would like one of the columns to increment a number for each row. I want the first row to start at 1000 and keep on incrementing by 1 till the end of the rows (about 2.7 million rows). I thought this would be a piece of cake, but I just can't seem to find anything like it on the internet... weird. Anyways, I'm just a rookie in SQL, any help would be appreciated. Thanks, JMT
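A minimal sketch, assuming the table is dbo.BigTable (hypothetical) with an existing ordering column id and a target column seqno: ROW_NUMBER() through an updatable CTE assigns 1000, 1001, ... in a single set-based statement.

;WITH numbered AS (
    SELECT seqno,
           ROW_NUMBER() OVER (ORDER BY id) AS rn
    FROM dbo.BigTable
)
UPDATE numbered
SET seqno = rn + 999;   -- first row gets 1000, then 1001, 1002, ...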
I have some code that I need to run every quarter. I have many procedures similar to this one, so I wanted to input two parameters rather than searching and replacing the values. I have another stored procedure that's executed from this one that I will also parameterize. The problem I'm having is embedding a parameter in the name of the called procedure (the EXEC statement at the end of the code). I tried it as I'm showing and it errored. I tried googling but couldn't find anything related to this; maybe I just don't have the right keywords. What is the syntax?
CREATE PROCEDURE [dbo].[runDMQ3_2014LDLComplete]
    @QQ_YYYY char(7),
    @YYYYQQ char(8)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT [provider group], provider, NPI,
           [01-Total Patients with DM],
           [02-Total DM Patients with LDL],
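A variable can't appear directly inside a procedure name in an EXEC statement, but EXEC does accept a single variable holding the full procedure name. A minimal sketch, assuming the called procedures follow a naming pattern like dbo.runDMQ3_<year>LDLComplete (the exact pattern here is an assumption):

DECLARE @proc sysname;
SET @proc = N'dbo.runDMQ3_' + @YYYYQQ + N'LDLComplete';  -- assumed naming pattern
EXEC @proc @QQ_YYYY, @YYYYQQ;   -- EXEC <variable> runs the procedure named in it

Note there are no parentheses around @proc; EXEC (@proc) would instead execute the string as an ad hoc batch.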
Every Sunday, new data is loaded from the temp table into the main table. I have to make sure that duplicates do not get loaded from the temp table into the main table.
For example, if a record was loaded from temp to main last Sunday, and the same record is present again this Sunday, then it is a duplicate.
A duplicate is decided by the scenario below:
SELECT 'CodeChanges: ', COUNT(*)
FROM CodeChanges a, CodeChanges_Temp b
WHERE a.AccountNumber = b.AccountNumber
  AND a.HexaNumber = b.HexaNumber
  AND a.HexaEffDate = b.HexaEffDate
  AND a.HexaId = b.HexaId
  AND
[Code] ...
Yesterday (Sunday), data from temp got loaded onto the main table, but with duplicates.
There is a log which just displays the number of duplicates.
Yesterday the log displayed 8 duplicates found. I need to find the 8 duplicates that got loaded yesterday and delete them from the main table.
There is a column in both tables holding the creation date and time. Every Sunday when the load happens, this column holds that day's date.
Now I need to find out which duplicates got loaded this Sunday.
Total rows in the temp table: 363. Number of duplicates present: 8.
I used the query below to find the duplicates, but it returns all 363 rows from the main table instead of the 8 duplicates.
SELECT 'CodeChanges: ', *
FROM CodeChanges a
WHERE EXISTS (SELECT 1
              FROM CodeChanges_Temp b
              WHERE a.HexaNumber = b.HexaNumber
                AND a.HexaEffDate = b.HexaEffDate
                AND
[Code] ...
I need to find the duplicate records whose creation date/time is '2015-11-01 00:00:00.000' and where all the columns mentioned in the query above match.
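A minimal sketch of that filter, assuming the creation column is called CreationDateTime (the actual column name isn't shown): restricting the outer query to rows created on the load date should isolate the 8 rows that arrived that Sunday instead of every historical match.

SELECT a.*
FROM CodeChanges a
WHERE a.CreationDateTime = '2015-11-01 00:00:00.000'   -- assumed column name
  AND EXISTS (SELECT 1
              FROM CodeChanges_Temp b
              WHERE a.AccountNumber = b.AccountNumber
                AND a.HexaNumber = b.HexaNumber
                AND a.HexaEffDate = b.HexaEffDate
                AND a.HexaId = b.HexaId);

Once the result is verified, the SELECT a.* can be changed to DELETE a to remove those rows.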
I have heard that a high number of VLFs isn't good; it can impact performance and delay recovery time, so I wanted to test that.
I created 2 DBs, each with a 100MB data file and a 50MB log file.
TestDB's log file had 100MB autogrowth; TestDB2's log file had 1% growth.
I inserted 1,048,576 records and took a backup.
Ran DBCC LOGINFO: TestDB had 40 VLFs and TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB: RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec). SQL Server Execution times: CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2: RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec). SQL Server Execution Times: CPU time = 109 ms, elapsed time = 8314 ms.
Question is: where is the difference? How is TestDB, which has 40 VLFs, better off than TestDB2, which has 165 VLFs?
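For reference, a sketch of how the VLF counts can be captured (DBCC LOGINFO returns one row per VLF, so counting its rows gives the VLF count; the column list below matches the SQL Server 2012/2014 output):

DECLARE @vlf TABLE (RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
                    FSeqNo int, Status int, Parity int, CreateLSN numeric(25, 0));

INSERT INTO @vlf
EXEC ('DBCC LOGINFO(''TestDB'')');

SELECT COUNT(*) AS vlf_count FROM @vlf;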
SELECT TOP 100
       LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative from when the plan was first cached. I want the count for one day only.
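The DMV only accumulates since the plan entered the cache, so per-day numbers need snapshots. A sketch, assuming a snapshot table dbo.QueryStatsSnapshot (hypothetical) captured once a day, e.g. by an Agent job; subtracting consecutive snapshots gives the executions in between. Plans evicted or recompiled between snapshots reset their counts, so treat the result as an approximation.

-- daily capture
INSERT INTO dbo.QueryStatsSnapshot (capture_date, sql_handle, execution_count)
SELECT CAST(GETDATE() AS date), sql_handle, execution_count
FROM sys.dm_exec_query_stats;

-- executions for one day = today's count minus yesterday's
SELECT t.sql_handle,
       t.execution_count - COALESCE(y.execution_count, 0) AS exec_count_for_day
FROM dbo.QueryStatsSnapshot t
LEFT JOIN dbo.QueryStatsSnapshot y
       ON y.sql_handle = t.sql_handle
      AND y.capture_date = DATEADD(day, -1, t.capture_date)
WHERE t.capture_date = CAST(GETDATE() AS date);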
I have a CTE query against a table with 32K rows that runs fine in 2008R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
DECLARE @amts TABLE (claim int, allowed decimal(12,2), copay decimal(12,2),
                     deductible decimal(12,2), coins decimal(12,2));

;WITH unpaid (claimID) AS
(
    SELECT claimID FROM claim WHERE amt + copay + disct + mm + ded = 0
)
INSERT @amts
SELECT lineID,
       SUM(rc),
       SUM(copay),
       SUM(deduct),
       CASE WHEN SUM(mm) > 0 AND (SUM(mm) < SUM(mmamt)) THEN SUM(mm) ELSE 0 END
FROM claimln
WHERE status IS NULL
  AND lineID NOT IN (SELECT claimID FROM unpaid)
GROUP BY lineID
It's as if there's some massively recursive process going on?
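One thing worth testing (a sketch, not a definitive diagnosis): 351M actual rows from a 32K-row table usually means the plan re-scans an inner input once per outer row, and SQL Server 2014's new cardinality estimator can choose such a plan where 2008 R2 didn't. Rewriting the NOT IN as NOT EXISTS, and optionally forcing the legacy estimator, can confirm whether the estimator is the culprit:

INSERT @amts
SELECT lineID,
       SUM(rc),
       SUM(copay),
       SUM(deduct),
       CASE WHEN SUM(mm) > 0 AND (SUM(mm) < SUM(mmamt)) THEN SUM(mm) ELSE 0 END
FROM claimln c
WHERE status IS NULL
  AND NOT EXISTS (SELECT 1
                  FROM claim cl
                  WHERE cl.claimID = c.lineID
                    AND cl.amt + cl.copay + cl.disct + cl.mm + cl.ded = 0)
GROUP BY lineID
OPTION (QUERYTRACEON 9481);  -- optional: run under the pre-2014 cardinality estimator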
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it worked for me as a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1, min/max server memory was set to 9215MB/10751MB.
In test 2, min/max server memory was set to 13311MB/14847MB.
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)
Since ca. 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so dramatically during run 1.
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see the number of writes (not the number of bytes written) starting to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally, I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries (a sketch follows below)
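A minimal sketch of the kind of per-interval counter capture meant above, assuming a snapshot table dbo.PerfSnapshot (hypothetical); the counter names are standard Buffer Manager entries in sys.dm_os_performance_counters:

INSERT INTO dbo.PerfSnapshot (capture_time, counter_name, cntr_value)
SELECT GETDATE(), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM(object_name) LIKE '%Buffer Manager'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'Free list stalls/sec',
                       'Lazy writes/sec');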
So I wonder why SQL Server starts to execute so many reads in test 1.
I have duplicate records in a table. I need to count the duplicate records based on account number, and the count will be stored in a variable; I need to check whether the count > 0 or not in a stored procedure. I used the query below; it is not working.
SELECT @_Stat_Count = COUNT(*), L1.AcctNo, L1.ReceivedFileID
FROM Legacy L1, Legacy L2, ReceivedFiles
WHERE L1.ReceivedFileID = ReceivedFiles.ReceivedFileID
  AND L1.AcctNo = L2.AcctNo
GROUP BY L1.AcctNo, L1.ReceivedFileID
HAVING COUNT(*) > 0

IF (@_Stat_Count > 0)
BEGIN
    SELECT @Status = status_cd FROM status-table WHERE status_id = 10
END
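A sketch of why this fails and one possible fix: T-SQL doesn't allow mixing variable assignment (@_Stat_Count = COUNT(*)) with ordinary output columns in one SELECT, the self-join to L2 matches every row to itself so HAVING COUNT(*) > 0 is always true, and a grouped assignment would only keep the last group's value anyway. Assuming AcctNo alone defines a duplicate:

SELECT @_Stat_Count = COUNT(*)
FROM (SELECT L1.AcctNo
      FROM Legacy L1
      JOIN ReceivedFiles rf ON L1.ReceivedFileID = rf.ReceivedFileID
      GROUP BY L1.AcctNo
      HAVING COUNT(*) > 1) AS dups;   -- accounts appearing more than once

IF (@_Stat_Count > 0)
BEGIN
    SELECT @Status = status_cd FROM [status-table] WHERE status_id = 10;  -- '-' in a name needs brackets
END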
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
- Data files: should go on the fastest-reading storage.
- Log files: should go on the fastest-writing storage.
- TempDB: involves a lot of writing at the same time the data files are being read.
- Indexes (including full-text indexes): involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first, sounded like a good idea but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and finding no MS documentation on an upper limit to the number of users a database can have we began some load testing to see how the database performed. In the hundreds of thousands of users range our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server with 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a max of 512 worker threads and the 60 mirror databases would consume 240 threads. What all needs to be checked when looking into the feasibility of going ahead with an async mirror setup as described above?
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
I need to change the first digit (0) in a number like 05678494 and replace it with 0046, so it would look like 00465678494 when done. I have tried the REPLACE function without any success. Can anyone help me?
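REPLACE() swaps every occurrence of a substring, which is why it misbehaves here; STUFF() replaces characters at a fixed position instead. A minimal sketch, assuming the number is stored as a character string:

SELECT STUFF('05678494', 1, 1, '0046');   -- replaces character 1: '00465678494'

-- or as an update, with dbo.Numbers/PhoneNo as hypothetical table and column names:
UPDATE dbo.Numbers
SET PhoneNo = STUFF(PhoneNo, 1, 1, '0046')
WHERE PhoneNo LIKE '0%';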
I have created a SharePoint external list using SQL Server; the data source is SQL Server. Is there a way to get the logged-in SharePoint user and store it in SQL Server?