SQL 2012 :: Could Not Open Global Shared Memory To Communicate With Performance DLL
Apr 22, 2014
Getting the following warning in SSIS - SQL 2012:
[SSIS.Pipeline] Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
I am getting the following warning for my SSIS 2008 package: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. I did check the "Warning in SSIS 2008" thread, but didn't find any solution there. The package processes data and executes fine, so why do I see this warning? When I run the package on my machine I see no such warning; it only appears when I deploy it to our DEV SSIS server.
Apparently this error was fixed in CU12 for SQL 2008, but it seems to have reared its head again in SQL 2012: [SSIS.Pipeline] Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
I've got a client who is seeing it, but I've not seen a fix in CU1 or CU2 for 2012.
I'm busy rewriting DTS packages as SSIS packages. As and when I finish a package, I run it in debug mode via Microsoft Visual Studio and then examine the Execution Results to see the messages generated.
Now, it may or may not matter how I run the package, but the following warning has been generated:
[SSIS.Pipeline] Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
One of my production SQL Server 2000 systems is listening on TCP and Named Pipes, but not on Shared Memory.
This server has a lot of scheduled jobs that are internal to this box. I assume these jobs would benefit from using shared memory instead of TCP/IP, but I can't figure out why it doesn't use shared memory already and how to correct that.
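A quick way to confirm which network library each session is actually using on SQL Server 2000 is to query sysprocesses. This is only a diagnostic sketch; shared-memory connections typically show up under the LPC library name.

    -- Show which network library (shared memory/LPC, TCP/IP, Named Pipes) each session connected with.
    -- On SQL Server 2000 this information lives in master.dbo.sysprocesses.
    SELECT spid, loginame, program_name, net_library
    FROM master.dbo.sysprocesses
    WHERE spid > 50;   -- skip system sessions

Sessions created by SQL Server Agent jobs will appear here as well; if they report TCP/IP rather than LPC, the server name or client alias that Agent connects with is probably forcing that protocol.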
A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)
.Net SqlClient Data Provider
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadPacket(Int32 bytesExpected)
at System.Data.SqlClient.TdsParserStateObject.ReadBuffer()
at System.Data.SqlClient.TdsParserStateObject.ReadByteArray(Byte[] buff, Int32 offset, Int32 len)
at System.Data.SqlClient.TdsParserStateObject.ReadUInt32()
at System.Data.SqlClient.TdsParser.ReadSqlValueInternal(SqlBuffer value, Byte tdsType, Int32 typeId, Int32 length, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.ReadSqlValue(SqlBuffer value, SqlMetaDataPriv md, Int32 length, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ReadColumnData()
at System.Data.SqlClient.SqlDataReader.ReadColumnHeader(Int32 i)
at System.Data.SqlClient.SqlDataReader.ReadColumn(Int32 i, Boolean setTimeout)
at System.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
I've just started getting this on a stable application that has used a data reader on millions of records.
Not sure where to go from here, and I can't find anyone else who is getting this failure during processing.
I could disable the shared memory protocol, but that seems extreme. I'm on SQL Enterprise 9.00.2047. Maybe the process is hammering the server very hard? Personally I've rarely ever seen SQL be the cause of an error; it's usually user configuration, bad disks, or power issues.
I'm running the app again with SQL Profiler capturing "standard" events.
Just need it to blow up again.
Of course, I can run the app on another machine, where the Shared Memory provider wouldn't be used. Maybe I ought to do that as well. At least if the error isn't really in shared memory, I'd have another avenue to explore.
I am new here and new to SQL Express. I've searched for my issue, but can't quite find anything close to the problem or how to solve it, if it's even solvable. I am using SQL Express on a PC to connect to the back end of a database. The front-end application (an Access runtime) also runs on the same PC. This PC is on a domain. I think I've tried every combination of protocols, and although connectivity via ODBC is successful, the application can't connect; it gives the "server doesn't exist or access denied" error. When I log on to this computer with the local machine account (not the domain account) and SQL Express is configured to use shared memory, the application runs just fine. I need to use this database for testing in a non-production environment, but I really hate having to log off the domain to run it. Ideas?
Our 32-bit applications connect to 32-bit SQL Server through OLE DB with shared memory as the preferred protocol. Our client applications and SQL Server generally reside on the same machine. We are evaluating the possible impact when 64-bit SQL Server 2008 is accessed by our 32-bit client applications running on 64-bit Windows Server 2008. Will the shared memory protocol still be used by the underlying SQL Server OLE DB DLL, considering the client application is 32-bit whereas SQL Server is 64-bit? Or will it switch to named pipes or TCP/IP automatically?
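One way to answer this empirically is to connect with the 32-bit application and ask the server which transport the current session is using; a minimal diagnostic sketch for SQL Server 2005 and later:

    -- Run from the client application's own connection.
    -- net_transport reports 'Shared memory', 'TCP', or 'Named pipe' for the session.
    SELECT session_id, net_transport, protocol_type
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;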
When I try to install MS SQL Server 2005 Developer Edition, I get the error:
[Microsoft][SQL Native Client]Shared Memory Provider: No process is on the other end of the pipe.
I have tried looking at other posts on this forum and elsewhere, but can't find any solution that works for me, mainly because all the solutions apply after installation.
Before trying to install MS SQL Server 2005 Developer Edition, I installed VS.NET 2005 Pro. At first the Native Client caused trouble, but I got it to work by reinstalling it; now, however, the SQL setup stops on every attempt with the error above.
I have checked whether the MSSQLServer service is running when setup tries to connect during installation, and everything says it is running (Services, net start, Task Manager).
I don't run any special setup on my system; it is a normal Windows XP Pro SP2 install with all updates. I just need SQL Server installed so I can develop locally without access to our main SQL server.
I have used MS SQL 2000 before and never had any problems, but 2005 keeps on bugging me.
The only solution I haven't tried is reinstalling Windows itself, but I would prefer not to.
And to be honest, I have no idea what a "pipe" is; I am used to developing web applications and not so much to server maintenance/troubleshooting.
When running the ETL I'm getting the error: <SSIS Task>: Shared Memory Provider: Timeout error [258], followed by the message "Communication link failure".
What is special about this message is that it happens on an Execute SQL Task (a random one each time) and the timeout occurs after 2 minutes.
When executing the packages separately, everything works fine. The SQL tasks that are failing are quite heavy, but reasonable, and take anywhere from over 2 minutes up to 10-15 minutes. The statements are stored procedures that put an index on 3 million records, or update statements, etc.
I had a look at all my (SSIS/ETL) timeouts and they have the default value 0; the "remote query timeout" of the server is set to 10 minutes. As far as I know, these are the only ones that exist?
There are 2 instances on the server; each instance has 24 GB allocated and the server has 64 GB in total. Also, when the ETL that results in the error runs, no other ETL is running on the 2 instances. I'm working with the OLE DB SQL Server Native Client 11.0 provider: SQLNCLI11.1.
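For reference, the server-level "remote query timeout" setting mentioned above can be inspected (and changed) with sp_configure; a sketch, assuming sysadmin permissions:

    -- Show the current 'remote query timeout' value (in seconds; 600 = 10 minutes).
    EXEC sp_configure 'remote query timeout';

    -- Set it to 0 (no timeout) if remote queries legitimately need to run longer.
    EXEC sp_configure 'remote query timeout', 0;
    RECONFIGURE;

Note that this option only governs queries issued against remote servers, so it would not by itself explain a 2-minute timeout on a local Execute SQL Task; the task's own TimeOut property and the connection manager's timeout are also worth checking.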
I did some load testing and found the following observations:
1. Memory: Pages/sec was crossing the commonly cited limit of 20.
2. Target Server Memory was always greater than Total Server Memory.
From the above data it seems to be memory pressure, but I found that available memory was always above 200 MB and the Buffer Cache Hit Ratio was close to 99.99. What could be the reason for this behavior?
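The same counters being watched in PerfMon can also be read from inside SQL Server, which makes it easy to capture them during a load test; a minimal sketch for SQL 2005 and later:

    -- Compare Target Server Memory (what SQL Server would like to have)
    -- with Total Server Memory (what it currently has committed).
    SELECT counter_name, cntr_value / 1024 AS value_mb
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Memory Manager%'
      AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');

A Target value that consistently exceeds Total is commonly read as the instance wanting more memory than it has been able to commit, which can indicate memory pressure even when available memory and the Buffer Cache Hit Ratio look healthy.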
I am receiving the following error when starting a program called ShelbySystems that is supposed to connect to a local database. I don't think this is a security issue, but I don't know much about SQL Server either, so...
DIAG [08001] [Microsoft][ODBC SQL Server Driver][Shared Memory]SQL Server does not exist or access denied. (17) DIAG [01000] [Microsoft][ODBC SQL Server Driver][Shared Memory]ConnectionOpen (Connect()). (2)
System info:
Windows 10 Home (upgraded from Windows 8, 64-bit)
SQL Server 2012 Express
SQL Server 2005 Backward Compatibility components (64-bit)
ShelbySystems software v5.4
I am including the trace log in case it is useful.
I am getting the following error when I try to connect to my web site from a different server: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) I am using SQL Express and I attach the database through the connection string in web.config. Any ideas?
We are going to install a SQL Server 2012 Enterprise Edition two-node (active/passive) cluster with only one instance. The issue is that no separate shared storage has been provisioned for MSDTC.
1. Is it mandatory to configure MSDTC for a single SQL 2012 instance?
2. Can we use one of the existing shared drives (data/log/backup/temp) for configuring MSDTC?
I need to add an existing shared folder to a SQL FileTable. This is the path; I have created the SQL FileTable, and now I need to add the folder's contents to it.
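A FileTable can't simply point at an arbitrary existing share; the usual approach is to create the FileTable and then copy the folder's contents into the share that SQL Server exposes for it. A rough sketch, assuming FILESTREAM and a FileTable directory are already enabled on the database; the table name DocumentStore is made up for illustration:

    -- Hypothetical FileTable; requires FILESTREAM and non-transacted access enabled on the database.
    CREATE TABLE DocumentStore AS FILETABLE
    WITH (FILETABLE_DIRECTORY = 'DocumentStore', FILETABLE_COLLATE_FILENAME = database_default);

    -- Returns the UNC path of the FileTable's share; copy the existing folder's files
    -- into this path (e.g. with robocopy or Windows Explorer) and they appear as rows in the table.
    SELECT FileTableRootPath(N'dbo.DocumentStore');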
So I started a new job recently and have noticed a few strange configurations. Typically I would never mess with the "min memory per query" or "index create memory" options, because I just haven't seen any need to; my usual thought is "if it isn't broke...". Yet they have been modified on every single server in my environment.
From Books Online:
• This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
• The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
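To see how far those two options have been moved from their defaults on each server, sp_configure can be used; a sketch, assuming permission to view advanced options:

    -- Both are advanced options, so expose them first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Defaults: min memory per query (KB) = 1024, index create memory (KB) = 0 (self-configuring).
    EXEC sp_configure 'min memory per query (KB)';
    EXEC sp_configure 'index create memory (KB)';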
I want to create and drop the global temporary table in the same batch.
I am using the command below, but I am getting the following error:
Msg 2714, Level 16, State 6, Line 11
There is already an object named '##Staging_Temp' in the database.
if object_id('Datastaging..##Staging_Temp') is not null
begin
    drop table ##Staging_Temp
end
CREATE TABLE ##Staging_Temp(
    [COL001] [varchar](4000) NULL,
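One likely cause is that the existence check looks in the user database, while global temporary tables always live in tempdb, so the DROP never fires and the CREATE then collides with the table left over from a previous run. A hedged sketch of the corrected check:

    -- Global temp tables are resolved in tempdb, not in the user database.
    IF OBJECT_ID('tempdb..##Staging_Temp') IS NOT NULL
    BEGIN
        DROP TABLE ##Staging_Temp;
    END

    CREATE TABLE ##Staging_Temp (
        [COL001] [varchar](4000) NULL
        -- remaining columns as in the original definition
    );

If the error still shows up at compile time rather than at execution, splitting the DROP and the CREATE into separate batches with GO is the usual workaround.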
Just two questions actually. We have built a ColdFusion-based forums package. Currently we have it on two beta sites, and we are using SQL 7 for the database. Firstly, the forums are serving 200-300 people at any given time, about 16,000 unique people a day. SQL 7 seems to stay at around 50% CPU usage on a dual P3 with 512 MB RAM. Is that normal? It seems like a lot of CPU usage. The other thing is that it takes 500 MB of RAM and just about drains the server of all of its RAM, even though in the memory properties for SQL Server it is set to a 255 MB maximum. Any insight is appreciated.
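It may be worth confirming that the 255 MB cap was actually applied as the run value; on SQL Server 7.0 this is the 'max server memory (MB)' option, shown here as a sketch:

    -- 'max server memory (MB)' is an advanced option.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- config_value is what was requested; run_value is what the server is actually using.
    EXEC sp_configure 'max server memory (MB)';

Note that this option governs the buffer pool, so the process working set reported by Task Manager can sit somewhat above the configured cap even when the setting has taken effect.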
I hope somebody here can help with my problem. I wrote an MFC application using SQL CE, and it runs perfectly smoothly on Windows CE and Pocket PC 2003. But when I deploy it to Windows Mobile, the database performance drops dramatically. I know the drop in performance is due to the I/O speed of the flash memory (the previous mobile OSes use RAM instead). Is there any solution or workaround with which I can solve this problem?
Recently I solved the performance issue on inserts by using a commit buffer (instead of committing to the .sdf instantly). But what about "select" performance? It's too slow; it takes about 3 seconds to select a record from the database.
Does Microsoft provide any suggestions on SQL CE performance on Windows Mobile?
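A 3-second lookup usually means the query is scanning the whole table on flash storage, so one thing worth trying is making sure the column used to locate the record is indexed. A rough sketch, with the table and column names made up for illustration (SQL CE supports this syntax):

    -- Hypothetical table; replace Orders/OrderId with the real names.
    CREATE INDEX IX_Orders_OrderId ON Orders (OrderId);

    -- With the index in place, a point lookup like this can seek instead of scan.
    SELECT * FROM Orders WHERE OrderId = 12345;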
In SQL 2005 we have 8 CPUs, and in Task Manager CPU usage is showing 100%. The performance of the server is very poor. Last night the server was rebooted, and CPU usage is still showing 100%. How can I improve the performance? It is very urgent for me; can anyone please let me know what is going on and what the solution is?
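Since this is SQL 2005, the plan cache can show which statements are consuming the most CPU, which is usually the quickest way to see what is driving the 100%; a diagnostic sketch:

    -- Top 10 statements by total CPU time since they were cached (SQL Server 2005+).
    SELECT TOP 10
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;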
I am facing 2 problems. PROBLEM 1: We have a few packages that run pretty fast on a desktop server with 2 GB RAM and a dual processor (approximately 4-5 hours). But the same packages run very slowly on another server with 8 CPUs and 12 GB RAM (they ran for 24 hours without completing).
PROBLEM 2: For the same package, CPU usage ranges from 40-80% and PF usage stays flat at 2 GB on the desktop server. But on the 8-CPU server, CPU usage ranges from 0-10% while PF usage rises from 750 MB to 8 GB.
I'm new to SQL Server; I decided to use it in order to get all the advantages of combining VB.NET and SQL. My server is a SQL Server Standard edition. I'm mostly using a relational DB for complex select queries; every time the server is used it performs 30 or 40 queries at a time, and I have recently realized that the server consumes a lot of memory after one or two days of being up.
For example, if I restart SQL Server, memory usage is about 650 MB, but after two days it is about 1.4 GB. I have used SQL Profiler and the Tuning Advisor, which recommended creating some indexes; I did create them, but that did not solve the memory problem, although some queries run faster.
My questions are:
Is this memory usage normal? If not, what should I check to reduce memory usage?
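Two things worth checking are how the memory is actually being used (most of it is normally the buffer cache, which is by design) and whether a memory cap has been set. A sketch that shows buffer cache usage per database:

    -- How much of the buffer cache each database is using (SQL Server 2005+).
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024 AS buffer_cache_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffer_cache_mb DESC;

Growth of this kind is normal behaviour, since SQL Server caches data pages until it is asked to give memory back; if the machine needs memory for other applications, capping 'max server memory' is the usual answer.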
I know the SSIS memory problem has probably been covered quite a bit, but being a newbie with SSIS, I'm not quite sure how to improve the performance of my SSIS package.
Basically, I have a package that loops through all the subdirectories within a specified directory; it then loops through each file in the subdirectory and, using a Data Flow, processes each file (according to its filename) with a Script Component to insert data into a SQL database.
Each subdirectory has up to 15 different CSV files, but each is less than 5 KB. I probably have about 100 subdirectories.
When I ran the package it functioned properly, but it stalled after a while (no error, just stuck in one Data Flow), and when I checked the CPU it was running at 100%.
I'm not sure how I could fix this or improve the memory allocation. I was not expecting any memory problems, as the file sizes are small and the number of rows going into and out of the Script Component is no more than 20.
I'm currently working on my first project that uses SQL Express. The performance of the queries is really quick with Management Studio closed. But when I have it open to test queries, my program seems to take longer to connect to the server. Is there something I haven't set up right, or is this to be expected when using the Express edition?
A query that was taking 20 seconds and consuming 70% CPU takes only 1 second after setting the maximum server memory property to 2048 MB. Why?
Server:
OS: Microsoft(R) Windows(R) Server 2003, Enterprise Edition, Version 5.2.3790, Service Pack 1, Build 3790
8 GB memory
Two dual-core AMD Opteron 285 2.6 GHz processors
Server is not configured for AWE
Fibre Channel connection to EMC CLARiiON - two LUNs, one for the MDF and one for the LDF
SQL 2005:
SQL 2005 32-bit Standard Edition - SP1 (version 9.0.2047)
Three instances installed on the server - only one instance in use
Binaries and system databases on local mirrored disk
Database file (MDF) on one EMC LUN - dedicated physical drives
Log file (LDF) on one EMC LUN - dedicated physical drives
Query in question:
SELECT TOP 10
    Address.Address1, Address.Address2, Address.City, Address.County, Address.State,
    Address.ZIPCode, Address.Country, Client.Name, Quote.Deleted, Client.PrimaryContact,
    Client.DBA, Client.Type, Quote.Status, Quote.LOB, Client.ClientID, Quote.QuoteID,
    Quote.PolicyNumber, Quote.EffectiveDate, Quote.ExpirationDate, Quote.Description,
    Quote.Description2, Quote.DateModified, Quote.DateAccessed, Quote.CurrentPremium,
    Quote.TransactionDate, Quote.CreationDate, Quote.Producer
FROM ((Client
    INNER JOIN Address ON Client.ClientID = Address.ClientID)
    INNER JOIN Quote ON Client.ClientID = Quote.ClientID)
WHERE (Quote.Deleted = 0) AND ((Address.AddressType) = 'Mailing')
ORDER BY Client.Name
With default maximum memory setting (2,147,483,647 MB) - query runs in 20 seconds and consumes over 70 % of the CPU.
After changing maximum memory setting to 2048 MB, query runs in less than 1 second.
The question is: what is the best practice for setting the minimum and maximum memory settings for SQL 2005? And what can be monitored to identify the cause of this type of issue - Profiler, PerfMon, some other tool?
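As a concrete example of the usual recommendation (leave min server memory low and cap max server memory so the OS and other processes keep enough room), the settings described above can be applied like this; the 2048 MB value is simply the figure from the post, not a general recommendation:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Values in MB; 2048 matches the change described above.
    EXEC sp_configure 'min server memory (MB)', 0;
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;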
I'm having a problem with my database server on the network. I'm running Windows Server 2003 Standard Edition with SQL Server 2005 Standard Edition.
The problem is that the server gets stuck and the performance of the whole network is affected. When I use Task Manager to monitor performance, I can see that the sqlservr.exe process is using 1,397,928 K of memory; in Performance Monitor the graphs go crazy and CPU usage grows to 85%.
Can you please let me know if there is something I can do to normalize the server's performance so that network users can work with the applications fed by this server?
Can someone point me to some good articles, or perhaps directly supply some words of wisdom, with regard to wise utilization of variables within a T-SQL script from the standpoint of conserving memory and improving execution cost?
For example:
(1) Is it better to use varchars, nvarchars, etc. defined with minimal lengths to support the needs of the script or is it just as efficient to declare all with a length of say 4,000?
(2) I've seen behavior that leads me to believe that when passing a variable as a parameter in a nested procedure call, if the declared types of the parameter and the variable being passed in don't match (i.e. one is numeric(38,10) and the other is int), then implicit type conversions hurt performance. Is this true and how broadly does it apply?
(3) Does the number of variables declared in a script materially impact performance and/or resource utilization?
(4) Is it more efficient to do a series of variable assignments in a single SELECT statement versus a series of SET statements? Should I always prefer one to the other, or only within a looping construct? (See the sketch below.)
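For question (4), here is a minimal sketch of the two assignment styles being compared; the variable names and values are made up for illustration:

    DECLARE @FirstName varchar(50), @LastName varchar(50), @RowCount int;

    -- Style A: one SELECT assigns several variables in a single statement.
    SELECT @FirstName = 'Ada',
           @LastName  = 'Lovelace',
           @RowCount  = 1;

    -- Style B: one SET per variable (SET allows only a single assignment per statement).
    SET @FirstName = 'Ada';
    SET @LastName  = 'Lovelace';
    SET @RowCount  = 1;

Functionally the two are equivalent here; the single SELECT saves a few statement executions, which tends to matter only inside tight loops.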
Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.
Performance between the application and the database isn't great, but, bizarrely, if you open a query window in Management Studio (against the database), performance improves dramatically - to the extent that it is visually noticeable from within the application.
So I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like, and that with the query window open this resolution happens more quickly.
Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.
Here's the connection string used by the application - I have tried various permutations of values to no avail:
Fellas!! This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story:
I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated ProLiant computer with 2 GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6 GB RAM). Nothing else is running on that specific computer.
Once SQL Server is started, it hits 300 MB RAM (the minimum that was set in the configuration of the server - remember, memory is dynamically acquired).
Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10,000 rows which are all of type IMAGE).
Once all the data is retrieved, the RAM is at about 400 MB. From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9 MB causes the RAM to go up by roughly 20 MB, and after commit it goes down 10 MB (a total gain of roughly 10 MB RAM). Eventually the SQLServer process hits its upper limit (1.6 GB), and at this point it starts slowing down.
Some performance checks showed me the SQL Server has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much, much slower).
We have a development environment running the exact same code (it is exactly the same in everything, except for the amount of data stored in the DB). This does not happen there at all.
I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.
It does not seem to have anything to do with the .NET code.
Thank you very much,
M Yamo.