I'm speccing up a new server. At a minimum I'll specify a quad-core CPU. What would be the best way to utilize the cores? Assign a core to specific processes (tempdb?), or let SQL Server figure out the best way to use the processors it finds?
Yesterday I was looking at the processor usage in the Windows NT Task Manager while a script of mine was running. The script was an InfoPump script, a tool from the DecisionBase suite from CA (previously owned by Platinum). This script contains SQL statements that select data from several tables and store the result in another table. The SQL code used for this looks fine to me. The query was running on a Compaq ProLiant 5500 with four 500 MHz Xeon processors, 1 GB RAM, NT Server 4 SP5, and RAID 5. SQL Server is configured to use all resources and has normal priority on NT. When the select part was running, all four processors were used at about 75%, but when the store happens, only one processor is used at 100%. Why is the store not spread over all four processors? It only uses one processor, and it seems to be a bottleneck.
We've had a problem for a few months now that has completely stumped us. We are running a heavily cursored, massive data manipulation process on a 32-bit SQL Server instance running on a virtual machine on top of VMware, with the following specs:
Processors: 2 x 2674 MHz; Memory: 4 GB; Disk: RAID 10
When we run our process on this machine, it completes in 30 hours in total.
When this process is run on another 32-bit server with the following specs:
Processors: 8 x 3658 MHz; Memory: 8 GB; Disk: SAN with RAID 5
it runs 25% slower.
But here is the real kicker. When this process is run on a 64-bit server with the following specs:
Processors: 8 x 3658 MHz; Memory: 8 GB; Disk: SAN with RAID 5
it runs 75% slower.
This process consists solely of stored procedures written in T-SQL. The weird thing is that on our smaller server, CPU utilization is evenly balanced (at 20-30%) while this large data manipulation process is running. On the bigger servers, however, SQL Server latches onto a single processor and doesn't balance the load across the others: only one processor out of the eight is utilized, pegged at 90%, while the other seven sit at zero.
We are using the default configuration settings in all three places.
Has anyone ever seen behavior like this, where only one processor gets used by SQL Server during processing? Granted, our processes are single-threaded because they use cursors, but it seems that the single thread shouldn't be restricted to one processor.
I'm transferring data from Excel to a SQL Server database. The columns in Excel may be named differently each time, or some columns may be missing; they are optional, so that's OK. So I'm thinking of having a table that defines the columns in the Excel file and which columns they map to in the SQL Server table.
Now, is it possible to use that table to read the Excel file with SSIS and transfer the data using the mapping defined in the "mapping" table? If so, what approach can I use?
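As an illustration, one shape such a mapping table could take (the names here are hypothetical, not from the post); an SSIS Script Task or a dynamically built SELECT could then read it to translate whatever headers a given workbook uses into the destination columns:

CREATE TABLE dbo.ExcelColumnMap (
    SourceColumn      NVARCHAR(128) NOT NULL,   -- header text as it appears in the Excel file
    DestinationColumn NVARCHAR(128) NOT NULL,   -- column name in the SQL Server target table
    IsRequired        BIT NOT NULL DEFAULT (0)  -- 0 = optional column that may be missing
);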
We are planning to set up SQL Express in a Windows OS clustered environment and are trying to shoot for an active-passive configuration. We know that SQL Server 2005 has this ability and will be leveraging that for mission-critical production applications; however, there are several other apps that we intend to use internally, and we would like to leverage Windows clustering for them. Has anyone done this?
- Should we share the INSTALLSQLDATADIR? If yes, how do we tell the SQL Express installation on cluster node #2, "hey, use this dir as the data dir"?
- Or should we not worry about it, and have only the databases that we create for our apps sit on the quorum/clustered resource drive?
I have a nightly job (an SSIS package) scheduled using MS. The package loads data from the OLTP DB to the warehouse. The server has 256 GB of memory, of which 211 GB is free.
The job usually runs without any problems, but sometimes it fails with the following error: "DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005."
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "The statement has been terminated.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint '<var>PrimaryKeyName</var>'. Cannot insert duplicate key in object '<var>TableName</var>'.".
When I researched this error, I found claims that it's because of a memory issue, but we have 222 GB of free memory, so how is this possible? Is there a way, in the package or anywhere else, to specify how much (what percentage) of the memory the SSIS package should use (something like the SSRS memory threshold level)?
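For what it's worth, the second OLE DB record above points at a duplicate key rather than at memory. A minimal sketch (the table and key names here are hypothetical) of a query that finds source rows whose keys already exist in the warehouse table before the load runs:

SELECT s.BusinessKey
FROM   OLTP.dbo.SourceTable AS s
WHERE  EXISTS (SELECT 1
               FROM Warehouse.dbo.TargetTable AS t
               WHERE t.BusinessKey = s.BusinessKey);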
Hi all, I am using SQL Server 7 and have 4 processors in the development server. How should I allocate those processors: do I need to allocate all 4 to SQL, or only 3? Similarly, with 8 processors in production, how many processors should go to SQL? Please tell me the way, and how.
At some point, my 4 processors began a cycle of peaks and valleys. After stopping ALL processes using the SQL Server, the processors were still doing this: max, then none, max, then none.
I think the oddest part of this is that all 4 processors showed exactly the same peaks/valleys from my workstation (Perfmon).
Anyone had their processors do this? HELP PLEASE.
While installing SQL2k I selected a 4-processor license agreement. The server only has two. Does anybody know of a technical reason I should change it back? Can it be done without reinstalling?
We have a 200 MHz Pentium Pro based machine with 128 MB RAM running SQL Server 6.5. Because of performance issues, we are contemplating an upgrade to a dual 200 MHz Pentium Pro setup with 256 MB RAM. However, the vendor we are dealing with has suggested an upgrade to a single Pentium II/333 MHz first and, if this still causes problems, then to a dual Pentium II/333 MHz. Does anyone have any suggestions from similar upgrades that they may have undergone? We have 72 MB allocated to SQL Server.
Hi, I have an application where I need to find out the following information regarding SQL Server: processors enabled, threads allocated, and priority. Can somebody throw some light on this? How are the processors related to the threads running, and the priority is with respect to what? Thanks, Verve.
Can I install SQL Server on a machine and use fewer than the number of processors on the machine? In the UNIX world, I'd call it LPARing with Oracle and AIX, and they only let me do this with Enterprise Edition. With Windows, I think the only way is using virtual machines and attaching processors to them? Do any vendors offer LPARing? Can I take any edition of SQL Server and use sub-capacity pricing so that I only pay for the processors I'm using?
What about SQL Server Express? It only schedules onto a single core, so could I put that on a larger machine?
Is it possible in SQL Server 2005 to limit the number of processors used? For cost reasons, we are consolidating servers and want to start running SQL Server 2005 on one of our dual-processor Win2K3 machines instead of the standalone machine it's currently running on. Because we have about 75 users, it's only cost effective to purchase a processor license (vs. a server license with CALs). But right now we only need and can only afford a single processor license, not two. So...
Is there any way in 2005 to limit the number of processors used so that we only need to purchase one processor license? I know in 2000 you could set this on the "Processor" tab of the "SQL Server Properties" dialog. In 2005, is this accomplished by unchecking the "Processor Affinity" and "I/O Affinity" checkboxes for processor #2 on the "Processors" page of the "Server Properties" dialog? If I uncheck these two options does that fully disable SQL Server 2005 from accessing the second processor in any way? From things I've read I can't tell if it restricts access to the second processor completely or if it just places some limitations on the ways it accesses the second processor. Also, the licensing information for SQL Server 2005 leads me to believe that if you are going down the "processor licensing" route that you have to buy a processor license for every processor that the OS itself has access to and not just what processors SQL Server has access to. I thought I understood that in SQL Server 2000 the licensing information did allow you to buy a processor license just for each processor that SQL Server 2000 had access to, but has that changed for 2005?
Hope someone can provide some clarification on limiting processor access and the licensing implications for SQL Server 2005.
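For reference, a minimal sketch of the T-SQL equivalent of the Processors page, using the affinity mask option to tie SQL Server 2005 to the first CPU only; whether that alone satisfies per-processor licensing is a separate question, as the post notes:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 1;   -- bit 0 set = schedulers bound to CPU 0 only
RECONFIGURE;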
Hi, is there a reason why we have to pay more for licensing for a different kind of processor? Why are we not charged for the hyperthreading on some processors also? If Oracle is really concerned about the low-end business market (small and medium), then they should drop their attitude on dual-core processors. If they start charging as if it was a normal processor, and ask the normal price, then they would get more of this market coming in. As long as Oracle keeps having the attitude of charging more because Intel or some other CPU vendor decided to improve their processors because of overheating problems, I will keep recommending alternatives to Oracle like MySQL / PostgreSQL / Sybase, etc., to the small/medium sector. Microsoft's pricing model on dual-core processors suddenly sounds a lot better. Oracle are shooting themselves in the foot! Or am I the only person feeling this way? Shaun O'Reilly
When Standard Edition says it supports 4 processors, is that just physical processors, or do we have to factor in multiple cores?
If SE supports 4 physical quad-core processors, is it written to optimally utilize the quad-core technology or would I be better off using Enterprise Edition?
A Visual C++ real-time system {NT4 (SP3) and SQL 6.5 (no SP)} on Pentium Pro machines performs as expected. When the Pentium Pro workstations were replaced with Pentium II machines, a significant performance impact was observed. From a cold boot of the system (1 server (Pentium Pro HP NetServer), 2 workstations (Dell PEDGE2300), and 5 PCs), the response time from SQL Server degraded dramatically (on 1 workstation) after running a volume test. Example from when the problem was found: a simple query run from the server would take milliseconds, and the same query run from the workstation took 20 seconds to complete.
I am facing a problem in writing the stored procedure for multiple search criteria.
I am trying to write the query in the procedure as follows:
Select * from Car
where (Price = @Price1 or Price = @Price2 or Price = @Price3)
  and (Manufacture = @Manufacture1 or Manufacture = @Manufacture2 or Manufacture = @Manufacture3)
  and (Model = @Model1 or Model = @Model2 or Model = @Model3)
  and (City = @City1 or City = @City2 or City = @City3)
I am not sure of the query, but I am trying to get the list of cars filtered based on the user's input.
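A minimal sketch of the usual optional-parameter pattern for this kind of search (the Car columns are taken from the query above; the assumption is that the first parameter of a group is NULL when the user leaves that criterion blank):

SELECT *
FROM Car
WHERE (@Price1 IS NULL OR Price IN (@Price1, @Price2, @Price3))
  AND (@Manufacture1 IS NULL OR Manufacture IN (@Manufacture1, @Manufacture2, @Manufacture3))
  AND (@Model1 IS NULL OR Model IN (@Model1, @Model2, @Model3))
  AND (@City1 IS NULL OR City IN (@City1, @City2, @City3));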
I am trying to create a report using Reporting Services.
My problem right now is that, because of the way the table is constructed, I am trying to pull 3 separate values, i.e. one is the number of hours, one is the type of work, and the third is the grade, out of one column and place them in 3 separate columns in the report.
I can currently get one value, but I don't know how to get the information into a form I can use in my reports.
So far, working with SQL Reporting Services 2005, I love it and have made several reports, but this one has got me stumped.
Any help would be appreciated.
Thanks.
I might not have made my problem quite clear enough. My table has one column labeled value. The value in that table is linked through an ID field to another table where the IDs are broken down so that one ID = number of hours, one ID = grade, and one ID = type of work.
What I'm trying to do is use these IDs to separate the values related to them into 3 separate columns in a query, for use in Reporting Services to create the report.
As you can see, I'm attempting to change the name of the same column 3 times to reflect the correct information and then link them all to the person, where one person might have several entries in the other fields.
As you can see, I can change the names individually in queries and pull the information separately; it's when I roll them all together that I run into my problem.
Thanks for the suggestions that were made; I apologize for not making the problem clearer.
Here is a copy of what I'm attempting to accomplish. I didn't have it with me last night when posting.
--Pulls the Service Opportunity
SELECT cs.value AS "Service Opportunity"
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
WHERE ca.name = 'Service Opportunity'
--Pulls the Number of Hours
SELECT cs.value AS 'Number of Hours'
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid =cs.attributeid
WHERE ca.name ='Num of Hours'
--Pulls the Person Grade Level
SELECT cs.value AS 'Grade'
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid =cs.attributeid
WHERE ca.name ='Grade'
--Pulls the Person Number, First and Last Name and Grade Level
SELECT s.personnumber, s.lastname, s.firstname, cs.value as "Grade"
FROM student s
INNER JOIN cperson cs ON cs.personid = s.personid
INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
WHERE ca.name = 'Grade'
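One way to roll the three attributes up into columns in a single query is conditional aggregation. A rough sketch using the table and attribute names from the queries above (it assumes all three attributes live in the table joined as cs in the last query, and that a person has at most one row per attribute; if a person can have several 'Num of Hours' entries, you would also need to group on the entry's own key):

SELECT s.personnumber,
       s.lastname,
       s.firstname,
       MAX(CASE WHEN ca.name = 'Grade'               THEN cs.value END) AS Grade,
       MAX(CASE WHEN ca.name = 'Num of Hours'        THEN cs.value END) AS [Number of Hours],
       MAX(CASE WHEN ca.name = 'Service Opportunity' THEN cs.value END) AS [Service Opportunity]
FROM student s
INNER JOIN cperson cs    ON cs.personid    = s.personid
INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
GROUP BY s.personnumber, s.lastname, s.firstname;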
I have a requirement wherein I have around 15 different flat files; the filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or a separate package? In addition, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and upload only the new records.
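For the product example, a minimal sketch of the "insert only new records" step (the staging table and column names are hypothetical), which could run in an Execute SQL Task after each flat file has been loaded into a staging table:

INSERT INTO dbo.Product (ProductID, ProductName)
SELECT s.ProductID, s.ProductName
FROM dbo.Product_Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Product AS p
                  WHERE p.ProductID = s.ProductID);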
I am in the process of creating a report, and in this I need ONLY the row groups (parent and child). I have a parent group field called "Dept", and its corresponding field is MacID. I cannot create a child group or column group (because that's not what I want). I am then inserting rows below MacID, and then I toggle the other rows to MacID and MacID to Dept.
I'm trying to create an email report which presents multiple results from multiple databases in a table format, but I'm trying to find out if there is a simple format I can use. Here is what I've done so far, but I'm having trouble getting it into HTML and also with the database column:
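One simple format that may work here (a rough sketch of one approach, not the script referred to above; the Database Mail profile, recipient, and per-database query are placeholders) is to build an HTML table with FOR XML PATH and send it with sp_send_dbmail:

DECLARE @body NVARCHAR(MAX);

SET @body =
      N'<table border="1"><tr><th>Database</th><th>State</th></tr>'
    + CAST((SELECT td = name, '',
                   td = state_desc
            FROM sys.databases
            FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX))
    + N'</table>';

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'ReportProfile',     -- placeholder Database Mail profile
     @recipients   = 'dba@example.com',   -- placeholder recipient
     @subject      = 'Multi-database report',
     @body         = @body,
     @body_format  = 'HTML';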
I'm trying to get some XML data into SQL Server, but I ran into a problem when inserting the data (multiple orders with multiple order details) using a single sproc. Is it possible, or do I have to do it some other way? :confused:
I simplified my example to this:
-----------------------------
--CREATE PROCEDURE sp_InsertOrders AS

DECLARE @docHandle INT, @xmlDoc VARCHAR(4000), @orderID INT

--DROP TABLE #Orders
CREATE TABLE #Orders (
    OrderId      SMALLINT IDENTITY(1,1),
    FkCustomerID SMALLINT NOT NULL,
    OrderDate    DATETIME NOT NULL
)

--DROP TABLE #OrderDetails
CREATE TABLE #OrderDetails (
    OrderDetailsId SMALLINT IDENTITY(1,1),
    FkOrderID      SMALLINT NOT NULL,
    ProductID      SMALLINT NOT NULL,
    UnitPrice      SMALLINT NOT NULL
)

INSERT INTO #Orders (FkCustomerID, OrderDate)
SELECT CustomerID, OrderDate
FROM OpenXML(@docHandle, 'Orders/Order', 3)
     WITH (CustomerID INTEGER, OrderDate DATETIME)

SET @OrderID = @@IDENTITY;

INSERT INTO #OrderDetails (FkOrderID, ProductID, UnitPrice)
SELECT @OrderID AS OrderID, ProductID, UnitPrice
FROM OpenXML(@docHandle, 'Orders/Order/OrderDetails', 3)
     WITH (ProductID INTEGER, UnitPrice INTEGER)
-----------------------------
All orders are inserted first, which makes the use of @@IDENTITY incorrect (it works fine if you insert a single order with multiple order details). Since it has been quite some time since I last worked with SQL, I am not sure if I am doing it the right way... :confused: Anybody out there who knows how to solve the problem?
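One way around @@IDENTITY here (a rough sketch, assuming CustomerID plus OrderDate uniquely identifies an order within the document) is to let the details rowset carry the parent order's attributes via '../@...' column patterns in OPENXML and join back to #Orders to look up the generated OrderId:

DECLARE @docHandle INT;
EXEC sp_xml_preparedocument @docHandle OUTPUT, @xmlDoc;

-- Insert all orders first
INSERT INTO #Orders (FkCustomerID, OrderDate)
SELECT CustomerID, OrderDate
FROM OpenXML(@docHandle, 'Orders/Order', 3)
     WITH (CustomerID INTEGER, OrderDate DATETIME);

-- Each detail row also exposes its parent order's attributes, so the
-- generated OrderId can be resolved with a join instead of @@IDENTITY
INSERT INTO #OrderDetails (FkOrderID, ProductID, UnitPrice)
SELECT o.OrderId, d.ProductID, d.UnitPrice
FROM OpenXML(@docHandle, 'Orders/Order/OrderDetails', 3)
     WITH (ProductID  INTEGER,
           UnitPrice  INTEGER,
           CustomerID INTEGER  '../@CustomerID',
           OrderDate  DATETIME '../@OrderDate') AS d
INNER JOIN #Orders AS o
        ON o.FkCustomerID = d.CustomerID
       AND o.OrderDate    = d.OrderDate;

EXEC sp_xml_removedocument @docHandle;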
I concatenate multiple rows from one table into multiple columns like this:
--Create Table
CREATE TABLE [Person].[Person_1](
    [BusinessEntityID] [int] NOT NULL,
    [PersonType] [nchar](2) NOT NULL,
    [FirstName] [varchar](100) NOT NULL,
    CONSTRAINT [PK_Person_BusinessEntityID_1] PRIMARY KEY CLUSTERED
[Code] ....
This works very well, but I want to concatenate more rows with different [PersonType] values in different columns, and I don't like the overhead of using the same table in every subquery ([Person_1]). Is there a more elegant way to do this, without using a temp table or something else?
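If the server is SQL Server 2017 or later (an assumption, since the post doesn't say), one single-scan alternative is STRING_AGG with a CASE filter per [PersonType], so the table is read only once and each type still lands in its own column (the type codes 'EM' and 'SC' below are just examples):

SELECT STRING_AGG(CASE WHEN PersonType = 'EM' THEN FirstName END, ', ') AS EmployeeNames,
       STRING_AGG(CASE WHEN PersonType = 'SC' THEN FirstName END, ', ') AS StoreContactNames
FROM [Person].[Person_1];

STRING_AGG skips the NULLs produced by the non-matching CASE branches, so each column concatenates only the rows of its own PersonType.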
I need to update multiple columns in a table with multiple conditions.
For example, this is my query:
update Table1
set weight = d.weight,
    stateweight = d.stateweight,
    overallweight = d.overallweight
from (select * from table2) d
where table1.state = d.state
  and table1.month = d.month
  and table1.year = d.year
If the row matches on all three columns (state, month, year), it should update only the weight column; if it matches on (state, year), it should update only the stateweight column; and if it matches on (year) alone, it should update only the overallweight column.
I can't write an update query for each condition separately because it's a huge select.
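A rough sketch of folding the three cases into one UPDATE with CASE expressions, joining only on year and tightening each condition inside the CASE (this assumes each Table1 row matches at most one Table2 row on year; the column names are the ones from the query above):

UPDATE t
SET weight        = CASE WHEN t.state = d.state AND t.month = d.month
                         THEN d.weight        ELSE t.weight        END,
    stateweight   = CASE WHEN t.state = d.state AND t.month <> d.month
                         THEN d.stateweight   ELSE t.stateweight   END,
    overallweight = CASE WHEN t.state <> d.state
                         THEN d.overallweight ELSE t.overallweight END
FROM Table1 AS t
INNER JOIN Table2 AS d
        ON t.year = d.year;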
I'm trying to create a database that takes specific information from a number of databases on different servers to make some reporting that we have much easier.
I'm pretty new to SQL, so I'm not sure of the best way to proceed. I read an article that suggested I use the OPENROWSET command. The problem is, the version of SQL that came with one of the programmes we use is limited and will not allow you to turn on "Ad Hoc Distributed Queries", so the SQL statement will not execute.
I'm confused why it won't let me connect through ODBC, as I've created a web page that selects data from this database with no problems!
Here is the SQL statement that I've written to make sure it is the correct one (on the msdn library page it said that this was the ODBC connection):
SELECT a.*
FROM OPENROWSET('MSDASQL',
     'DRIVER={SQL Server};SERVER=APPOLOACT7;UID=sa;PWD=***************',
     'SELECT * FROM MDCTestAndDev.dbo.TBL_CONTACT') AS a
I've also created the ODBC connection using the tool at Administrative Tools > Data Sources (ODBC).
Any help would be greatly appreciated (also, any way of selecting from one database and inserting into another would be helpful).
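If the limited edition allows linked servers (a rough sketch, on that assumption), one alternative to OPENROWSET that does not require "Ad Hoc Distributed Queries" is to register the remote server once and then query it with four-part names:

EXEC sp_addlinkedserver
     @server = N'APPOLOACT7',
     @srvproduct = N'SQL Server';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'APPOLOACT7',
     @useself = 'FALSE',
     @rmtuser = N'sa',
     @rmtpassword = N'***************';

SELECT * FROM APPOLOACT7.MDCTestAndDev.dbo.TBL_CONTACT;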
After hitting limitations in the SQL CLR world that bar us from invoking COM objects, we are forced to use Windows services to read the messages off the Service Broker queues. Unfortunately we lose the auto-activation feature of the queues, but we can still read messages and perform the SQL work under one transaction.
We are going to attempt to take N messages simultaneously from the queue, through N instances of a Windows service. If the messages sent to the queue are one message per conversation, will we be able to have N readers take messages off simultaneously?
Thank you very much,
Lubomir
P.S. If anyone has a better approach to obtaining the messages in "out of SQL" code, or to invoking external code libraries (not assemblies stored in SQL Server), that would be extremely nice to hear. I have thought about invoking a web service through CLR, but that is probably too much overhead; MSMQ seems much more appealing than a web service.
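For what it's worth, a minimal sketch of what each of the N service instances could execute (the queue name is hypothetical): RECEIVE locks the conversation group it reads from, so with one message per conversation, N readers can each pull a different message at the same time without blocking one another.

DECLARE @handle UNIQUEIDENTIFIER,
        @message_type SYSNAME,
        @body VARBINARY(MAX);

BEGIN TRANSACTION;

WAITFOR (
    RECEIVE TOP (1)
        @handle       = conversation_handle,
        @message_type = message_type_name,
        @body         = message_body
    FROM dbo.TargetQueue                 -- hypothetical queue name
), TIMEOUT 5000;

IF @handle IS NOT NULL
BEGIN
    -- ...perform the SQL work here, inside the same transaction...
    END CONVERSATION @handle;
END

COMMIT TRANSACTION;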