Some Strange Behavior With Parallelism
Jul 7, 2006
Hi,
In my package I have a source, a script component that makes some changes to that data, and a destination. To speed up the process, within a data flow I have created 6 copies of the above components and run them in parallel. Each source takes a different set of data; I have divided the data by record number so that each set reads 1 million records.
Now, my question is: though each pipeline is supposed to process exactly 1 million records, they are not running at the same speed. For example, one pipeline completes processing all 1 million records while another pipeline has processed only 250,000 records in that time. I don't see any reason why one should run slowly while another runs fast, considering that both are doing the same thing.
Do you have any idea about this?
Thanks.
View 6 Replies
Feb 1, 2008
I've made a new table that stores the UserId, a uniqueidentifier taken from Membership.GetUser().ProviderUserKey. If I run the select through a stored procedure in code-behind it works as it should:
Code behind
Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()
But when I use an ObjectDataSource it fails:
<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
  <SelectParameters>
    <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
  </SelectParameters>
</asp:ObjectDataSource>
I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
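One thing worth noting is that defaultvalue is only ever a literal string, so the expression in it is never evaluated, which would explain the InvalidCastException. A minimal sketch of assigning the key at runtime in the ObjectDataSource's Selecting event instead (the handler name is illustrative):
Protected Sub ObjectDataSource1_Selecting(ByVal sender As Object, ByVal e As ObjectDataSourceSelectingEventArgs) Handles ObjectDataSource1.Selecting
    ' ProviderUserKey is a Guid for the SQL membership provider; pass the real object
    ' here instead of the literal string that defaultvalue would supply.
    e.InputParameters("UserId") = Membership.GetUser().ProviderUserKey
End Sub
With this in place the <asp:Parameter> keeps its name and type but the defaultvalue attribute can be dropped.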
View 1 Replies
View Related
Nov 29, 2005
I have an SP that usually works fine (0-16 CPU time, 40 ms duration), but from time to time the server hangs with apparently no reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs, but it doesn't. The Profiler reports very long execution time (over 30 sec), and because of that all other SP calls are blocked, because the transaction opened by the first SP execution is not finished yet.
Any other attempts to identify blocking queries did not show me anything suspect (sp_lock, dbcc opentran) other than the usual blocked chain. I'm starting to think about an IO bottleneck, or an IO failure, that could block disk access and cause the delay. The status of the RAID 5 array is healthy.
The server is used as the storage system for a website (approx. 2000 concurrent users), and occasionally I noticed an ASP queue, but this strange behavior occurs even during off-peak hours.
Any thoughts ?
-----
HP Server - 2 CPU @ 3,4 ; 4 GB RAM; SCSI - RAID 5
Windows 2000 Advanced Server - SQL Server 2000 SP4
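For context, error 1222 in SQL Server 2000 aborts only the statement, not the batch, so unless the proc checks for it explicitly the transaction can stay open. A rough sketch of the pattern (table, column and variable names are placeholders, not the actual proc):
SET LOCK_TIMEOUT 500

DECLARE @key int
SET @key = 1                       -- placeholder parameter

BEGIN TRAN
UPDATE dbo.SomeTable               -- placeholder for whatever the proc really does
SET SomeCol = 1
WHERE SomeKey = @key
IF @@ERROR = 1222
BEGIN
    -- lock timeout: roll back and bail out instead of leaving the transaction open
    ROLLBACK TRAN
    RETURN
END
COMMIT TRAN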
View 1 Replies
View Related
Jul 23, 2005
Hi folks,
I have a C# app connecting to an MS Access database with several tables. In a specific situation I have a problem with a DateTime type in a table. The problem is that when I want to select records from a table in a specific period, the day and month seem to be swapped in the query, but it only happens when the swap gives a valid date, e.g.:
12/10/2005 (12 Oct 2005) returns records for 10/12/2005 (10 Dec 2005)
23/05/2005 (23 May 2005) returns records correctly, since 05/23/2005 is not a valid date with Danish regional settings.
The query is:
"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"
_start and _end are of type DateTime. My PC is running with Danish regional settings, and if I switch to en-US settings in the control panel this fixes the problem, but that is not a solution for me. Any suggestions to solve this problem?
Thanks in advance.
Kim W.
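One way to sidestep the regional-settings issue entirely is to pass the dates as typed parameters instead of concatenating them into the SQL text. A minimal sketch (connectionString is assumed; _start and _end are the DateTime values from the post):
// requires: using System.Data.OleDb;
string sql = "SELECT [ID], [Activity], [BeginDate] FROM TimeReg " +
             "WHERE [BeginDate] >= ? AND [BeginDate] <= ?";
using (OleDbConnection conn = new OleDbConnection(connectionString))
using (OleDbCommand cmd = new OleDbCommand(sql, conn))
{
    // Access/Jet uses positional parameters; the names below are only labels
    cmd.Parameters.Add("@start", OleDbType.Date).Value = _start;
    cmd.Parameters.Add("@end", OleDbType.Date).Value = _end;
    conn.Open();
    using (OleDbDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // read ID, Activity, BeginDate here
        }
    }
}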
View 4 Replies
View Related
Nov 1, 2007
I have the following code below on a standalone computer and it worked perfectly. Suddenly, without any significant changes to the code, no server instances were found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQL-DMO.
// Get a list of SQL servers available on the networks
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
foreach (DataRow drServer in dtSQLServers.Rows)
{
String ServerName;
ServerName = drServer["Server"].ToString();
if (drServer["Instance;] != null && drServer["Instance"].ToString().Length > 0)
ServerName += " + drServer["Instance"].ToString();
if (cmbServer.Items.IndexOf(ServerName) < 0)
cmbServer.Items.Add(ServerName);
}
View 3 Replies
View Related
Mar 5, 2008
I have 2 packages that for ease I'll call Parent & Child. The Parent package calls the Child package as the 4th step in the process. Once the Child has completed, the Parent has a few more imports that it does.
The Portfolio table is loaded in the Child package which is step 4 in the Parent package. Then in step 5 a few tasks utilize that Portfolio data for lookups.
The strange part is that there are probably 4 or 5 data tasks that do lookups against the Portfolio data in step 5 (step 5 is a container). All but 2 of the data tasks retrieve data from the Portfolio table. The other 2 don't find any data and just move on. Once the package stops, if I simply execute those tasks they run and load the data correctly.
It seems to me to be a caching or an isolation problem but I can't find a solution.
Any ideas?
View 7 Replies
View Related
Aug 2, 2005
Hi all,
I face a problem as follows: we have an application running on SS2K. We log every delete of documents (from the Archive table) in another table. Now it seems some of the rows have been deleted, strangely, without any delete log from our application. We assumed there is somebody who has direct access to the database and deleted them manually (obviously our app does not generate any log in this situation), but there is no such person; we checked that with the admins many times.
Does SQL Server itself delete rows for any reason? How can I know what is happening? Do you think our app is flawed somewhere?
Thanks a lot for your attention.
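One way to catch deletes no matter where they come from (the application, Query Analyzer, a job) is a server-side audit trigger on the table itself rather than relying on the application's own logging. A rough sketch; the DocumentId column and the audit table are assumptions for illustration:
CREATE TABLE ArchiveDeleteAudit (
    DocumentId  int,                             -- assumed key column of Archive
    DeletedOn   datetime DEFAULT GETDATE(),
    DeletedBy   sysname  DEFAULT SUSER_SNAME(),
    DeletedFrom sysname  DEFAULT HOST_NAME()
)
GO
CREATE TRIGGER trgArchiveDeleteAudit ON Archive
AFTER DELETE
AS
    -- records every deleted row, regardless of which connection issued the DELETE
    INSERT INTO ArchiveDeleteAudit (DocumentId)
    SELECT DocumentId FROM deleted
GO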
View 1 Replies
View Related
Oct 5, 2004
Hi,
We are noticing some strange behavior with MSSQL. I was hoping somebody can shed some light.
Since the past few days in our production database we have been getting the following error
Could not allocate space for object 'Person' in database 'PROD' because the 'PRIMARY' filegroup is full...
Some data on our system:
The PRIMARY filegroup is 20 GB in size, and 80% of it is free. Also, the PRIMARY filegroup is set to auto-grow and there is about 20 GB of free space at the OS level. So I don't think it has anything to do with the filegroup itself.
I started doing some research on the 'person' object (table), ran sp_spaceused etc. to get some data. On a trial-and-error basis I ran DBCC INDEXDEFRAG on the 'person' table and the error went away.
Questions
1. Why is the error misleading? Why does it say, the 'PRIMARY' filegroup is full?
2. Why am I getting this error and why does running DBCC INDEXDEFRAG fix the problem?
3. I can understand the index being fragmented and needing a defrag, but can MSSQL server actually fail with this error if the index is fragmented too much?
4. What data can I look at and prevent this from happening in the future?
Any other data will be much appreciated.
Thanks so much.
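For reference, these are the kinds of checks that could be scheduled against the 'person' table to catch the condition before the error reappears (SQL Server 2000 syntax):
-- space allocated and used by the table and its indexes
EXEC sp_spaceused 'person'

-- scan density / extent fragmentation for every index on the table
DBCC SHOWCONTIG ('person') WITH ALL_INDEXES

-- free space remaining in each file of the current database (sizes are in 8 KB pages)
SELECT name,
       size / 128 AS size_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sysfiles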
View 4 Replies
View Related
Mar 22, 2006
Running SQL 2000 SP4 on Windows 2000 Server.
When a SELECT query is executed in Query Analyzer, results are displayed in the results pane, fine... when an ORDER BY clause is added to the SELECT statement, the query runs for approx. 20 seconds then displays "TempDb log is full [Error 9002, Severity 17]". (The tempdb is set to autogrow/10%/unrestricted and there is plenty of storage space.) The next time the query is executed after getting the "tempdb log is full" error, the server reboots upon query execution. As soon as F5 or Ctrl-E is pressed to execute the query, the server does a hard crash - black screen then reboot... no warnings, no Event Viewer log, no SQL log warnings/errors, no drwtsn log, no hardware log errors... nothing.
Re-applied Windows 2000 SP3 and SQL SP4 to server, same behavior.
View 4 Replies
View Related
Aug 17, 2007
Hi folks,
I have some code that just works on its own. But when I put it inside an exec() I get a strange error. First, the code:
exec ('
select
year,quarter, min(price) as minimum
into #temptable from
(
select
ntile(4) over (partition by year,quarter order by price) as rang
,year
,quarter
,price
from
(
select distinct id,year,quarter,price from #tbl1
) as a
) as b
group by rang,year,quarter
Select year ,quarter,
(
SELECT CAST(minimum as varchar(max)) + ","
FROM #temptable t2
where t1.year=t2.year AND t1.quarter=t2.quarter
FOR XML PATH("")
)
from #temptable t1
group by year,quarter
')
SQL Server says that the INSERT INTO is missing a column name. It points at the line with FOR XML PATH("").
Any idea what's wrong here?
The Output without exec (and correct quotes) looks like:
200430.00,252.90,331.40,470.00,
200440.00,241.00,325.00,450.00,
20051102.00,242.90,326.37,448.00,
200520.00,253.00,340.00,480.00,
200530.00,250.00,325.00,465.00,
2005443.00,260.00,355.00,490.00,
Thank you
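For comparison, one frequent culprit with EXEC() is the double quotes inside the string: depending on the QUOTED_IDENTIFIER setting in effect they are parsed as identifier delimiters rather than string delimiters, so "" can be read as a (missing) column name. A sketch of the same statement that avoids double quotes altogether by doubling single quotes inside the outer literal:
exec ('
select
year, quarter, min(price) as minimum
into #temptable from
(
select
ntile(4) over (partition by year, quarter order by price) as rang
,year
,quarter
,price
from
(
select distinct id, year, quarter, price from #tbl1
) as a
) as b
group by rang, year, quarter
Select year, quarter,
(
SELECT CAST(minimum as varchar(max)) + '',''
FROM #temptable t2
where t1.year = t2.year AND t1.quarter = t2.quarter
FOR XML PATH('''')
)
from #temptable t1
group by year, quarter
')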
View 3 Replies
View Related
Oct 10, 2007
I created a very simple SSIS package (it just updates a single row in a table). When I execute the package from the command line (using dtexec), it takes about a second to finish, as expected. But when I execute it using dtexec via xp_cmdshell, it takes about 91 seconds. When I use a SQL job to execute the package as an operating system type, it takes 91 seconds. Using a SQL job to execute it as a SSIS package takes again 91 seconds. It appears that something is causing a delay of about 90 seconds before the package actually gets executed. I tried changing the SSIS service account, but that didn't change anything. Why is executing the package through SS2005 different than executing it directly from the command prompt?
View 4 Replies
View Related
Jun 1, 2004
I have a trigger on each table in a database which updates a datetime column (lastupdatedon) and a varchar field (enteredby) after update on each individual table. The problem is, when one table is updated at the same instant as another table (by different users), the same varchar data (SYSTEM_USER) is put in both tables, even though the users are different.
Here is an example of the trigger:
CREATE TRIGGER EventUpdate ON jrowley.Event
AFTER UPDATE AS
UPDATE jrowley.Event
SET LastChangedOn = getdate(), EnteredBy = SYSTEM_USER
WHERE EventID IN (SELECT EventID FROM deleted)
Any suggestions are welcome.
Thanks,
Jerry
View 2 Replies
View Related
May 1, 2008
Ok maybe someone smarter than me (not difficult) can help me out :)
Two queries:
#1:
select
a.load_id,
b.attribute_name,
a.attribute_loc,
b.attribute_loc
from
PCI_Template_NR_Map b
left outer join
PCI_Master a
on a.attribute_name=b.attribute_name and
a.load_id in (select distinct top 53 load_id from PCI_Load)
#2:
select
a.load_id,
b.attribute_name,
a.attribute_loc,
b.attribute_loc
from
PCI_Template_NR_Map b
left outer join
PCI_Master a
on a.attribute_name=b.attribute_name and
a.load_id in (select distinct top 54 load_id from PCI_Load)
#1 Produces a correct left outer join, any values in PCI_Template_NR_Map that are not in PCI_Master show null. This works for any number of load_id values in the subselect up to 53.
#2 is the exact same query, except I am no longer limiting it to 53; when I get to 54 (or if I take away the TOP altogether) it returns rows as if it were a normal inner join instead of a left outer join (i.e. it only shows rows that match between PCI_Master and PCI_Template_NR_Map).
Can anyone explain to me what is happening here, and how to get around this issue? I need to be able to filter this for as many load_ids as I need (usually about 200). Thanks in advance,
Brad
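One workaround worth trying (a sketch that keeps the post's table and column names) is to apply the load_id filter inside a derived table, so the left outer join itself carries nothing but the join predicate:
select
    a.load_id,
    b.attribute_name,
    a.attribute_loc,
    b.attribute_loc
from
    PCI_Template_NR_Map b
left outer join
    (
        select load_id, attribute_name, attribute_loc
        from PCI_Master
        where load_id in (select distinct load_id from PCI_Load)
    ) a
on a.attribute_name = b.attribute_name
For a left outer join this is logically the same filtering, since every row of PCI_Template_NR_Map is preserved either way, and it keeps the IN subquery out of the ON clause.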
View 20 Replies
View Related
Apr 25, 2008
I have a fairly complex report. I have a couple of sub reports on the left hand side, a table in the middle and a couple of rectangles on the right side of the screen. When I try to add another sub report, even though I make it about a one inch square, it pushes the rectangles three or four inches further right when I display it.
Any way to rein in an errant sub report?
Thanks.
View 4 Replies
View Related
Jan 20, 2007
I have a data flow with two lookup components (call them lookup1 and lookup2). They both query the same relational table but with different values. Each has a single-row result set containing one column, and each of the two columns is mapped to a corresponding package-level variable. The original data flow sequence had lookup1 executing after lookup2. Each component redirects errors to a separate text file.
Lookup1 succeeds but lookup2 fails on every row which populates its error text file; however I can construct a sql query from the lookup2 values that returns the expected result.
If I reverse the sequence of components (lookup2 followed by lookup1) lookup2 still fails on every row. Whenever both lookups are present in the dataflow, lookup2 fails for every row and all its rows are redirected to the error text file
Now this is where it gets interesting. If I omit either lookup1 or lookup 2 from the data flow, it works. If the data flow contains lookup1 only the destination is populated. If the dataflow contains only lookup2 then no errors are written to the error text file, all lookups succeed, and the destination is populated.
I'm stumped. Is it possible that both lookups selecting from the same table could cause a problem? Each works independently, but when both are together in the data flow, lookup2 fails for every row. I've been over the configuration and code a dozen times and am positive there are no errors; besides, lookup2 runs fine if lookup1 is excluded from the data flow.
View 11 Replies
View Related
Jun 23, 2000
I have a stored proc with the following two lines of code:
Select @SumCredits = (Select Sum(CreditAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
Select @SumDebits = (Select Sum(DebitAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
If I execute this stored proc via Query Analyzer, it will take about 11 seconds. If I execute the above two SQL statements individually within Query Analyzer, each takes less than a second (the entire stored proc should take about a second). This hasn't always been happening; the behavior started only recently, after we imported a large amount of data into our database. However, I don't know if the two events are related.
Has anyone ever noticed this type of thing?
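If a stale plan or stale statistics after the big import is the suspect, a couple of low-risk experiments might narrow it down (AccountBalanceList is taken from the post; the proc name below is only a placeholder):
-- refresh statistics on the table the statements read
-- (if AccountBalanceList is a view, target its base tables instead)
UPDATE STATISTICS AccountBalanceList

-- mark the proc so it compiles a fresh plan on its next execution
EXEC sp_recompile 'usp_GetAccountBalances'     -- placeholder proc name

-- or run it once with an ad hoc recompile and compare the timing
EXEC usp_GetAccountBalances WITH RECOMPILE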
View 1 Replies
View Related
Jan 18, 2007
Try a little experiment. Partly to humor me, and make me believe I am not quite insane.
Step 1: Install SQL Server 2000 such that the data files are not in the default location, but in a location with a shorter path (i.e. install the data files to E:\MSSQL8).
Step 2: Run the following queries, and comment on any oddities:
select filename
from master..sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename)), filename
from sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename))
from sysaltfiles
where dbid = 2
I am guessing that #2 is some sort of odd effect caused by the fixed length data field, but I just want to make sure that other people get this oddity, and not just me. I have no idea what could be causing the third output...or perhaps the lack of it.
View 2 Replies
View Related
Jul 23, 2005
Hello,
I developed a Win32 .exe CGI that connects to a clustered SQL Server to report some data. The software is written with Borland C++Builder. This is the OLE DB string:
Name=Provider=SQLOLEDB;Password=xxx;Persist Security Info=True;User ID=xxxx;Data Source=xxxxx;Initial Catalog=xxxxx;Network Library=dbmssocn
It suddenly stopped working on my customer's network, so I made some tests and verified that the problem is in the connection with SQL Server: my test program just opens a connection, closes it and exits, reporting in a log file whether the open succeeded or failed.
If I run the program locally, just launching it, there is no problem; it works. I can run it multiple times continuously and it connects every time. If I run the program through my web server, as a CGI, which is how it is supposed to work (http://localhost/scripts/connect.exe), it connects the first time, and then I have to wait 40 seconds to connect again successfully, or it fails. If I try against MY SQL Server on my laptop or on my network there is no 40-second problem, but on my customer's network, with THEIR SQL Server, if I try to connect from their web server, or from my laptop's web server, I have this 40-second problem.
I analyzed the network traffic, and I discovered that when I run my test program locally it generates only TCP/IP packets, and SQL Server answers only with TCP/IP. But when I use it from the web server as a CGI, it sends a UDP packet first, SQL Server answers with another UDP packet, and then they communicate over TCP/IP. That is when it works: the second time my program keeps sending the UDP packet, but it receives no answer, and the connection fails.
I can only say that:
- we haven't touched the program for months, and it really stopped working suddenly, so I suspect that something in my customer's network has changed
- I tried many different OLE DB strings, disabling connection pooling and all the services, calling the SQL Server by name or IP...
- the problem can't be related to my program, because now it is really just an OLE DB connection test
Anyone have an idea?
Thank you very much,
Mattia
View 1 Replies
View Related
Apr 25, 2008
Has anyone experienced this? I'm upgrading my SSRS 2000 SP2 instance to 2005 SP2 and I have to "upgrade" the database twice (the first time I click "upgrade" in Configuration it throws an error).
So, I get Report Manager to open. But when I go to run a report I receive an rsInternalError exception. I look in the log and see that a few columns are marked as NOT NULL and the system is trying to push null values into those columns. The columns are:
SnapshotDataID
IsPermanentSnapshot
HasInteractivity
I also get serious blocking on the key of the ChunkData table in tempdb as well; this was a serious issue that I had a ticket open with Microsoft about. As I'm finding is typical of the SSRS helpdesk, I had to fix it myself by adding locking hints to the CreateChunkAndGetPointer sp, changing the last line to:
Code Snippet
SELECT @ChunkPointer = TEXTPTR(Content)
FROM [ReportServerTempDB].dbo.ChunkData AS CH with (nolock)
WHERE CH.ChunkID = @ChunkID
stopped the blocking issues that I was having.
Now, my question is this, is this normal or do I have a hosed installation of SSRS 2000 that I should just take the time to scrap and start new? Should I have to do this just to get reporting services working after upgrading?
PS - I found a bug in the upgrade script that is created for running later: if ReportServer is not the name of the database, the script will still use ReportServerTempDB in one of the stored procedures that is created.
Any input on this would be GREATLY appreciated. Any input from MS would be better!
Thanks
Scott
View 1 Replies
View Related
Dec 1, 2006
Hi to all!
I hope you can help me, because I must build a release version of my SSIS package in 2 hours.
This is my big issue:
I have SQL Server 2005 (SP1, 64-bit version) and a remote SQL Server 2000 (SP4).
I've created a Data Flow task that contains an OLE DB source (native client on the local SQL 2005 server) and a Lookup transformation that has to connect to the remote SQL Server 2000 and return some columns.
I see everything - tables, columns - no problem at first sight. I choose some columns to use in the lookup output and confirm.
When I re-open my lookup task, the columns are not selected (checked).
Obviously my OLE DB destination can't see those columns.
I choose them, but SSIS doesn't store any columns in the output.
I've tried using the error output, the advanced editor, creating another package with SQL 2005 tables only... but no change, the same silent error. I can't use the columns as output to add to my sets.
Can you help me please?
I really need the help!
Thanks in advance.
View 3 Replies
View Related
Jan 25, 2007
Hi, I came upon a strange behavior of Format function in report, which I'm unable to explain.
I have some double value, which I want to format. E.g. the value is in db field "Users" and is 303870
1)
If I set a Value property for the field to
=Fields!Users.Value or =CDec(Fields!Users.Value)
//the value is already double so the CDec has no effect
and Format property for the field to
=Format(me.Value,"#,#") or =Format(Fields!Users.Value,"#,#")
then the number which is visible in the report is 3303,873870 ???
2)
If I put in the Value property
=Format(Fields!Users.Value,"#,#")
and the Format property is empty
then the output is OK, 303,870; however this is not desired, because the value is then handled as a string
3)
If I put in the Value property
=Fields!Users.Value
and into Format property
=Format(CStr(me.Value),"#,#")
then the output is OK again.
As I understand it, the Format should be the same as in VB (I am actually a C# programmer, not very familiar with VB), so I tried to use Format in VB on a double value, and the output was as expected (e.g. 303,870), but when I used it on a string I got only the style (e.g. #,#).
So I wonder, does the Reporting Services Format function work correctly only with string input? But why then does example 2) work? Or do I have a mistake somewhere?
Thanks for the advice.
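For what it's worth, the combination that usually behaves (assuming the Format property is meant to hold a plain .NET format string rather than an expression) is:
Value:  =Fields!Users.Value
Format: #,#   (a literal format string, not an =Format(...) expression)
or, doing the formatting entirely in the Value expression and leaving the Format property empty:
Value:  =Format(CDbl(Fields!Users.Value), "#,#")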
View 3 Replies
View Related
Oct 1, 2004
I have two SQL Server instances on two servers. One server is my web server and database server, and the other one is just a database server. I have an application that calls a stored procedure located on the webserver/database server that runs a query on the OTHER database server. I use a linked server on my first instance to make the call possible.
Everything was working just fine for months until the database server was restarted and the IP address was changed. The name of the database is the same however and my first SQL Server instance has no problems running queries on the other databases tables. However, when you try to run the application i get the following error:
Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection
I have mixed mode authentication selected and my security uses the security context with username=sa and password=sa.
So here's the weird part.
The application will only run correctly when I manually run a SQL command from my web server's Query Analyzer against the linked SQL Server. However, after a few minutes the same error comes back!! So as a temporary fix, I scheduled a DTS job to run a simple query on the linked server every two minutes, so the application keeps working! It's almost as if the web server's SQL Server forgot that the linked server is there, and by running a simple query in Query Analyzer the connection gets refreshed and everything is normal again - for about 3 minutes!
I am completely stumped by whats happeneing and appreciate any help. Thank you.
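For reference, a sketch of re-establishing the linked server's network name and login mapping after the remote box changes (LINKEDSRV is a placeholder for the actual linked server name):
-- repoint the linked server if the old name/IP no longer resolves the same way
EXEC sp_setnetname @server = 'LINKEDSRV', @netname = 'NewServerNameOrIP'

-- map all local logins to an explicit SQL login on the remote server,
-- so calls never fall back to a (failing) trusted connection
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'LINKEDSRV',
    @useself    = 'false',
    @locallogin = NULL,
    @rmtuser    = 'sa',
    @rmtpassword = 'sa'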
View 3 Replies
View Related
Jan 21, 2008
I'm a wee bit of a newbie concerning DTS and have inherited a db with a DTS containing a Copy SQL Server Objects task set to run nightly. Essentially, it does an informal backup of some core data.
Recently, I was notified that one of the tables it copies over is now empty on the Destination db. The DTS shows that it runs successfully with no errors logged, the table in question IS selected to be copied from the Source database, there IS data in the Source database table, and every other table in the Destination database is populated appropriately.
Any ideas on what would cause this one table to be empty without generating any errors?
FYI, running SQL Server 2000.
View 1 Replies
View Related
Jul 20, 2005
I'm using Access as the front end to SQL 2000. I have a table of contacts; UniqueID is the PK. There's also a column named "CreatedBy" (int) and a column Category (smallint). They're all set to no nulls. If I create an index on Category, an SP that searches the CreatedBy column fails and I get an error message: 2757 There was a problem accessing a property or method of the OLE object. If I delete that column from the index then the opposite happens, with the same error. I have even deleted the columns, saved the table, and recreated the columns. This is an aggravating one!
View 1 Replies
View Related
Apr 20, 2006
I know you can change the max degree of parallelism server-wide, but can you do it on the fly for one query? I know... trust the query processor, but when I turn parallelism off for this one SP my query goes from 3 seconds to 0, and I've got this ex-MS guy in here telling me there is a way, but he does not remember how.
I want him to simplify the sp or have his project's DBA do it, and I even offered to take a hack but.... you know.
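There is in fact a per-statement hint for this; a sketch (the query itself is only a stand-in for whatever the SP runs):
SELECT c.CustomerID, COUNT(*) AS OrderCount      -- placeholder query
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (MAXDOP 1)    -- restrict just this statement to a single CPU
The hint goes at the end of the statement inside the proc, so nothing has to change server-wide.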
View 2 Replies
View Related
Oct 29, 2001
Does anyone know about SQL Server's parallelism?
A query without parallelism can take much less time than the same query with parallelism; in my case it's 6 times faster without parallelism.
If that's true, what do we need parallelism for?
Any ideas
Thanks
View 2 Replies
View Related
Jul 23, 2005
I have a function that returns a table of information about residential properties. The main input is a property type and a location in grid coordinates. Because I want to get only a certain number of properties, ordered by distance from the location, I get the properties from a cursor ordered by distance, and stop when the number is reached. (Not really possible to determine the distance analytically in advance.) The cursor also involves joins to a table of grid coordinates vs. postcodes (the properties are identified mainly by postcode), and to a table that maps the input property type into what types to search for. Opening the cursor typically results in the creation of six to eight parallel threads, and takes approx 1 second, which is about half of the total time for the function.
Recently the main property table grew from 4 million to 6.5 million records, and suddenly the parallelism is lost. Taking the identical code and executing it as a script gives parallelism. Turning it into an SP that inserts into a #temp table and then selects * from that table as the last statement also gives parallelism. But when it's in the form of a function, there is only one thread, and the execution time has gone from ~2 sec to ~8 sec. I updated the statistics on the table, but still no parallelism.
I could turn it into an SP easily enough, but that would involve a change to the C++ program that calls it, which takes a while to get through the pipeline. In the meantime, is there some way to induce the optimizer to use parallelism? It used to.
View 3 Replies
View Related
Dec 15, 2006
Hi,
I've set 'max degree of parallelism' to 1 because some SQL requests hung. Now, when I connect, how can I set the parallelism to 4 for a session? Is there a command like 'alter session set max degree of parallelism 4'?
Thanks
Paul
View 6 Replies
View Related
Jul 20, 2005
If SQL Server is designed for multi-processor systems, how can running a query in parallel make such a dramatic difference to performance? We have a reasonably simple query which brings in data from a few non-complex views. If we run it on our 2x2.4GHz Xeon server it takes 6 minutes plus to run. If we run it on the same server with OPTION(MAXDOP 1) at the end of the same query, it takes less than a second.
Examining the execution plan, the only difference I have been able to see is that parallelism is taking up 96% of the run time when using two processors. This drops when using one, so a sort takes up the vast majority of the time for the query to run.
OK, so running in parallel should mean that it's run in various parts and then 'joined up' later for performance gains, but how can it get it so wrong (timewise)?
If this is the case, will I see a significant difference changing our server to use a single processor, which seems completely the wrong approach (or should I do this on each query in each app - eek)?
Do we have a problem that we don't know about that causes it to take this long?
What can we do? Ideally, using both processors would seem to be preferable.
View 2 Replies
View Related
Oct 16, 2006
Hi,
I would just like to confirm something with you guys...
Am I correct in saying that you don't need multiple connections to the same DB in an SSIS package in order to achieve parallel processing across multiple SQL tasks? In other words, I have 2 SQL tasks executing different stored procedures on the same DB that I want to run in parallel. They should be able to share one connection and still process in parallel, correct?
With that in mind, would the processing be faster if they each had their own connection?
Thanks in advance.
View 1 Replies
View Related
Mar 16, 2007
After running a query, at first all processors are working,
but after 2-3 seconds only one is.
SQL 2005
Hewlett Packard DL580 (16 processors)
Any ideas?
View 6 Replies
View Related
Sep 2, 2015
I have SQL Server Version:-
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64) Jun 28 2012 08:36:30 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
This is just a UAT server, with the OS and hardware details below:
OS :- Windows Server 2008 R2 Standard
SP:- SP1
Processor :- Intel(R) Xeon(R) CPU X5650 @2.67GHz 2.66 GHz
RAM : - 4 GB
Bit - 64 bit
I want to set the value for max degree of parallelism; what value should I configure for it?
Below is a snapshot of the SQL instance properties >> Processor.
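For reference, the server-wide value is read and changed with sp_configure; the number below is only an example to adjust for the actual workload, not a recommendation for this particular box:
-- show the current setting
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism'

-- change it; 0 means let SQL Server decide, 1 disables parallel plans
EXEC sp_configure 'max degree of parallelism', 2    -- example value only
RECONFIGURE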
View 3 Replies
View Related