Hi,
Sorry for placing this in the SQL Server forum, but I couldn't find an appropriate forum for my question. Can anybody help me?
My question is -
How do I allocate extended memory in Windows XP?
Thanks!!
Joydeep
This message is received on the client. The client PC has an Intel 2 GHz or better processor, 512 MB RAM, and sufficient hard drive space, and connects to MS SQL Server 2000 through TCP/IP.
The database server is running Windows 2000 Advanced Server with SP3 and MS SQL Server 2000 Enterprise Edition with SP3, with 4-way 700 MHz PIII Xeon processors and 4 GB RAM (I'm not certain about network connectivity, but it's at least 100 Mb Ethernet). The database is approximately 87 GB, with an average of 250 to 300 connections.
The application is vendor-supplied and written in Visual Basic 3.0, so I am using 16-bit SQL drivers, the latest I am aware of, dated 6/15/1997.
This database was previously running on SQL Server 7, and this error did not occur. It started after the upgrade to SQL Server 2000. We discovered the error while testing the upgrade and found that by decreasing the "network packet size" setting on SQL Server with sp_configure, we were able to make the message go away. However, now that we are in a production environment, the message seems to be coming back randomly. We have the network packet size set to 1024 (the default is 4096). I'm worried about performance if it is dropped much further.
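For reference, this is roughly how we changed the setting (a minimal sketch of the sp_configure calls; 'network packet size' is an advanced option, and 1024 is the value we currently run with):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'network packet size', 1024;   -- bytes; the default is 4096
    RECONFIGURE;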
I'm running SQL Server 2005 SP2 on Windows 2003 Server SP1 with 2 GB of RAM. After start-up, sqlservr.exe only takes up around 100 MB of RAM, and it stays roughly there even when the DB is used heavily. This leads to very poor performance, even timeouts on simple queries.
In Task Manager I can see that, of the 2 GB of RAM, more than 1 GB is still available. I don't understand why SQL Server won't take it.
As a test, I configured both the min and max amount of RAM SQL Server should use to 1024 MB and restarted the service, but it is still the same picture. It won't take more than around 100 MB.
The server has just been restarted, but the problem remains.
BTW, there is also an instance of SQL Server 2000 on the same machine. It shows the same behaviour; I even checked the "reserve physical memory" checkbox there, but it stays at a very low number (50 MB) and doesn't adhere to the configured size.
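For completeness, this is roughly the sp_configure equivalent of what I set in the GUI (a minimal sketch):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 1024;
    EXEC sp_configure 'max server memory (MB)', 1024;
    RECONFIGURE;
    -- check configured vs. running values
    EXEC sp_configure 'min server memory (MB)';
    EXEC sp_configure 'max server memory (MB)';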
Hi, I've got a table with 65'000 records, and when I do a SELECT * FROM tablename ORDER BY Name I receive this error message:
Msg 1105, Level 17, State 1 Can't allocate space for object '-513' in database 'tempdb' because the 'system' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment. Msg 1510, Level 17, State 2 Sort failed: Out of space or locks in database 'tempdb'
So I've dumped the transaction log with no_log, and I've also extended the segment from the master database (because tempdb is located there by default):
sp_extendsegment system, master
But I still get the error message. Can anybody advise me? Thank you.
At the moment there is no data in the Place_No field.
I want to assign a Place_No to all records based on the number of points (Total_Points). The highest points value should get a place number of 1, and so on.
However, where a number of applications have the same points I want to randomly allocate a place number among them. Application_IDs 49, 96 and 155 all have 75 points, so each of the 3 applications should be randomly allocated one of the place numbers 3, 4 and 5. I cannot allocate them based on their order in the table, as it has to be seen as a 'lottery', and each time it is run they would expect to get a different result.
The same thing then has to happen with the last 3 records in this sample, allocating place numbers 6, 7 and 8.
I was hoping to create a stored procedure to do this, but I’ve no idea where to begin. I would appreciate any help you could give. Thank you.
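Something along these lines is roughly what I imagine the procedure would do, if I'm on SQL Server 2005 or later (a rough sketch only; the table name Applications is a placeholder, while Place_No, Total_Points and Application_ID are my real column names):

    UPDATE a
    SET    a.Place_No = r.rn
    FROM   Applications AS a
    JOIN  (SELECT Application_ID,
                  ROW_NUMBER() OVER (ORDER BY Total_Points DESC, NEWID()) AS rn
           FROM   Applications) AS r
      ON   r.Application_ID = a.Application_ID;
    -- NEWID() randomizes the order within each group of equal Total_Points,
    -- so tied applications get a different place number on every run.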
I am getting a "Could not allocate space for object 'temp_trc' in database'Test' because the 'PRIMARY' filegroup is full"The database test has unrestricted growth (All the defaults). It resides ondrive c which has 4Gigs free. I added new data and log files on drive dwhich is about 30G free. I know that my insert doesn't take even 1G diskspace.Why is the database complaining about a full filegroup when I just expandedit?J.
Could anyone please help me fix this error ASAP?
Server: Msg 1105, Level 17, State 2, Line 1 Could not allocate space for object '(SYSTEM table id: -334560816)' in database 'TEMPDB' because the 'DEFAULT' filegroup is full.
"Could not allocate new page for database 'TEMPDB'. There are no more pages available in filegroup DEFAULT. Space can be created by dropping objects, adding additional files, or allowing file growth."
I get this error when running a query on another database. But why?
Both the data and transaction log files of TEMPDB are set to "automatically grow file" and "unrestricted file growth", and there is 70 GB of free space on the disk drive. Shouldn't the files just grow? Why would this happen?
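For what it's worth, this is the check I've been running to confirm the settings really are what the GUI shows (a minimal sketch):

    USE tempdb;
    EXEC sp_helpfile;   -- maxsize should show UNLIMITED and growth should be non-zero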
I get the following error when doing a variety of basic queries against other databases:
"Could not allocate new page for database 'TEMPDB'. There are no more pages available in filegroup DEFAULT. Space can be created by dropping objects, adding additional files, or allowing file growth."
This doesn't make any sense, since the files are set to auto-grow and there is plenty of disk space to do so.
Both the data and transaction log files of tempdb are set up as follows: "Automatically grow file" is checked, "Maximum file size" is set to "Unrestricted file growth", and the growth rate is 10%.
Both the tempdb data file and transaction log file are on D:, but all drives have ample space: C: 25 GB free, D: 69 GB free, E: 175 GB free.
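In case it helps, the sketch below is the T-SQL equivalent of those GUI settings, which I'm considering re-applying explicitly (tempdev and templog are the default logical file names; yours may differ):

    ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, FILEGROWTH = 10%, MAXSIZE = UNLIMITED);
    ALTER DATABASE tempdb
      MODIFY FILE (NAME = templog, FILEGROWTH = 10%, MAXSIZE = UNLIMITED);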
I'm getting "Could not allocate space for objects in database 'abc'", and I have added 1 GB (1024 MB) of free space to the primary data file of 'abc'. The primary data file of the 'abc' database is now 120 GB, and the file properties are: "Automatically grow file" is checked, growth is by 1 percent, and "Restrict file growth" is set to 121024 MB. Still, the database shows space available as 0.00, and the total size is 132186 MB.
As of now I haven't got any other alert. Please let me know how to proceed if I get one in the near future.
One DTS package is running continuously on this DB.
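One thing I want to double-check is whether the "Restrict file growth" value (121024 MB) is actually lower than the file's current size, which would stop it from growing at all. A minimal sketch of the checks I can run to see where the space has gone (both procedures exist on SQL Server 2000 and later):

    USE abc;
    EXEC sp_spaceused;   -- reserved vs. unallocated space in the database
    EXEC sp_helpfile;    -- per-file size, max size and growth settings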
Hello, I have an issue with a process that blows up because of the following error.
Executed as user: batchloader.
Updated 0 existing Company records [SQLSTATE 01000] (Message 0)
Inserted 0 new Company records [SQLSTATE 01000] (Message 0)
Inserted 0 new EntityIdXref records [SQLSTATE 01000] (Message 0)
Updated 977 existing CompanyCustomerAttr records [SQLSTATE 01000] (Message 0)
Inserted 0 new CompanyCustomerAttr records [SQLSTATE 01000] (Message 0)
Could not allocate new page for database 'TEMPDB'. There are no more pages available in filegroup DEFAULT. Space can be created by dropping objects, adding additional files, or allowing file growth. [SQLSTATE 42000] (Error 1101).
The step failed.
OK, I am going to be describing some really bad practices (I just started here 3 weeks ago).
There is 23.6 GB free on my log drive. The disk is not running out of space and there are no disk errors in Event Viewer. The process in question calls 2 procs. These 2 procs load files from the filesystem and bulk load them into #temp tables. Then SELECTs from these tables are issued using criteria from a static table. There are around 700,000 rows being inserted into the #temp tables and no indexes are being created, so there are very large table scans going on. There are also some cursors being used to manipulate the records row by row, and inside those cursors it calls functions that use cursors. There are thousands of files being processed every day by several different jobs, and all of the processes are written the same way. We have tempdb set to auto-grow by 10% and the initial size is 3.5 GB. There are 3,000 to 4,000 tables in the database and 90% of them are created on the fly to be used by this process, and yes, once again, there are no indexes created on the on-the-fly tables. We have only one filegroup on the server, the default.
I believe that taking some of the objects and moving them to their own filegroup will help this issue. Every month we take on up to 800,000 new records to process on top of what we already do. So we use cursors, cursors, cursors, temp objects with no indexes, and massive record sets, and we do sorts on those massive record sets. I am working with development to show them how and where to index, but that will take time. I need a quick solution. Any thoughts, any questions? The box has 4 GB of RAM.
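As a stopgap I'm considering two things; a rough sketch of each is below (the column, file and path names are placeholders, not taken from our actual code):

    -- 1. Index the #temp tables right after the bulk load, before the big SELECTs,
    --    so the repeated scans can become seeks (record_key is a placeholder column).
    CREATE CLUSTERED INDEX IX_stage_key ON #stage (record_key);

    -- 2. Add a second tempdb data file on another drive to spread out allocations
    --    (file name and path are placeholders).
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb2.ndf',
              SIZE = 2048MB, FILEGROWTH = 10%);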
Do all SQL data types consume whole numbers of bytes? We have an app that might be best suited to bit manipulation at the nibble rather than the byte level.
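For context, the sort of thing we'd want to do is pack two 4-bit values into one byte. A minimal sketch of what I mean (SQL Server stores fixed-length types in whole bytes, and BIT columns are packed 8 to a byte, so the nibble packing/unpacking would be ours to do in T-SQL):

    DECLARE @hi TINYINT, @lo TINYINT, @packed TINYINT;
    SET @hi = 9;
    SET @lo = 5;
    SET @packed = (@hi * 16) + @lo;                               -- pack two nibbles into one byte
    SELECT @packed / 16 AS hi_nibble, @packed % 16 AS lo_nibble;  -- unpack them again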
We got an error "Failed Virtual Allocate Bytes: FAIL_VIRTUAL_RESERVE"
Microsoft SQL Server 2005 - 9.00.5000.00 (Intel X86) Dec 10 2010 10:56:29 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2). We have 24 Intel CPU cores and 64 GB of RAM on the server. The SQL service starts with AWE + LPM enabled and the "-g512" option.
Why we still use such a relic configuration is not my fault. I investigated the log and found that the memorystatus "Optimization Queue" section has really huge values:
Optimization Queue
  Overall Memory = 1321132032
  Target Memory = 1158799360
  Last Notification = GROW
  Timeout = 6
  Early Termination Factor = 5
Small Gateway
  Configured Units = 32
  Available Units = 26
  Acquires = 6
  Waiters = 0
  Threshold Factor = 250000
  Threshold = 250000
Medium Gateway
  Configured Units = 8
  Available Units = 7
  Acquires = 1
  Waiters = 0
  Threshold Factor = 12
  Threshold = 16094435
Big Gateway
  Configured Units = 1
  Available Units = 1
  Acquires = 0
  Waiters = 0
  Threshold Factor = 8
  Threshold = 144849920
[code]...
What forced the buffer pool to allocate pages in the MTL region, and how can I determine which queries are responsible? The same question applies to SQLOS. As a workaround I increased the -g startup option to "-g2048", and "max server memory" was decreased to 54 GB.
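The closest I've come to an answer so far is to look at the memory clerks; a minimal sketch of the query I've been using to see which clerks are taking multi-page / VAS allocations outside the buffer pool (column names are the SQL Server 2005 ones):

    SELECT type,
           SUM(multi_pages_kb)              AS multi_page_kb,
           SUM(virtual_memory_committed_kb) AS vas_committed_kb
    FROM   sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY SUM(multi_pages_kb) DESC;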
I need to load a table with 820,000 records from a Sybase DB via DTS. It always fails with the error: "Error at destination for row number 820000. Could not allocate space for object in tablespace tempdb. The default filegroup is full.".
There is only the primary filegroup defined in the DB. I've increased its size from 1.5 GB to 2 GB and specified that it should grow automatically by 10% with no limit on the size. There is still some 28 GB free on the server, so it should be fine.
It still fails, so I added another file to the primary filegroup with a size of 100 MB. Again, it failed with the same error message.
This code, when run concurrently from several threads, yields the following exception: "The connection was not closed. The connection's current state is open."
My questions are: 1. Why doesn't .NET allocate another connection from the pool (I only run 2 threads concurrently while there are 25 connections in the connection pool)? 2. How can one explicitly allocate a connection? 3. How do you suggest solving this problem without a mutex/monitor etc. on the 3 bold lines above and without BeginExecuteNonQuery()?
The following is the report from the SQL Server Mobile subscription wizard. Any ideas?
New Subscription Wizard
- Beginning Synchronization (Success)
- Synchronizing Data (100%) (Error)
Messages
A call to SQL Server Reconciler failed. Try to resynchronize. HRESULT 0x80004005 (29006)
The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation. If a republishing Subscriber has run out of identity ranges, synchronize the republishing Subscriber to obtain more identity ranges before restarting the synchronization. If a Publisher runs out of identit HRESULT 0x80045647 (0)
Invalid parameter @subid specified for sys.sp_MSmerge_log_idrange_alloc_on_distributor. HRESULT 0x0000523F (0)
The operation could not be completed.
- Finalizing Synchronization (Stopped)
- Saving Subscription Properties (Stopped)
Initially I thought it might be that some of the articles had primary keys of type nvarchar rather than int, thus resulting in no identity range being able to be assigned to those articles.
Test 1: I tried removing all articles that had nvarchar primary keys and left only one table that had an identity int primary key column. I then ran the Snapshot Agent, went through the subscription wizard again, and the error was the same.
Test 2: Re-reading the error message, I then tried only the tables that don't use identity columns, and the wizard completed successfully. Any idea what would be wrong with my articles that have identity columns? The article properties for the identity columns use the identity range management defaults.
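One thing I haven't tried yet, in case the Publisher itself has simply run out of ranges: sp_adjustpublisheridentityrange is supposed to allocate a new identity range for a publication. A minimal sketch (the publication name is a placeholder):

    EXEC sp_adjustpublisheridentityrange @publication = N'MyPublication';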
Is there a setting that will enable uniform extent allocation upon creation of an index/table by default? If there isn't any default setting, can you code it in? Thanks, Doron
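The closest thing I've found so far is trace flag 1118, which makes SQL Server allocate uniform (full) extents instead of mixed extents for new objects, server-wide; a minimal sketch, offered as an assumption about what's being asked rather than a definitive answer:

    DBCC TRACEON (1118, -1);   -- enable for all sessions; or add -T1118 as a startup parameter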
I created a query, and when I run it like this I get data, but when I add a second CASE for '2%' I get an error.

    SELECT a.email,
           CASE WHEN a.reportnumber LIKE '1%' THEN (SELECT b.Reportnumber   FROM ijasSummaryNo b WHERE a.Reportnumber = b.Reportnumber) END AS Reportnumber,
           CASE WHEN a.Reportnumber LIKE '1%' THEN (SELECT b.stonebreakdown FROM ijasSummaryNo b WHERE a.Reportnumber = b.Reportnumber) END AS Measurement,
           CASE WHEN a.Reportnumber LIKE '1%' THEN (SELECT b.reportcarddate FROM ijasSummaryNo b WHERE a.Reportnumber = b.Reportnumber) END AS ijasDate,
           CASE WHEN a.reportnumber LIKE '2%' THEN (SELECT c.Reportnumber   FROM appraisalsummaryblue c WHERE a.reportnumber = c.reportnumber) END AS imacsRepNo
    FROM   t_RegisterInfoTemp a

The query works fine like this, but when I add the following (the part that was marked in bold) I get the error:

    CASE WHEN a.reportnumber LIKE '2%' THEN (SELECT c.Reportnumber FROM appraisalsummaryblue c WHERE a.reportnumber = c.reportnumber) END AS imacsRepNo,
    CASE WHEN a.reportnumber LIKE '2%' THEN (SELECT c.Measurement  FROM appraisalsummaryblue c WHERE a.reportnumber = c.reportnumber) END AS Measurement2
This is the error: Server: Msg 4414, Level 16, State 1, Line 1. Could not allocate ancillary table for view or function resolution. The maximum number of tables in a query (260) was exceeded.
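In case it is relevant, a rewrite I am considering replaces the per-column correlated subqueries with two LEFT JOINs, so the table count stays low; a rough sketch only, using the same tables and columns as above:

    SELECT a.email,
           CASE WHEN a.Reportnumber LIKE '1%' THEN b.Reportnumber   END AS Reportnumber,
           CASE WHEN a.Reportnumber LIKE '1%' THEN b.stonebreakdown END AS Measurement,
           CASE WHEN a.Reportnumber LIKE '1%' THEN b.reportcarddate END AS ijasDate,
           CASE WHEN a.Reportnumber LIKE '2%' THEN c.Reportnumber   END AS imacsRepNo,
           CASE WHEN a.Reportnumber LIKE '2%' THEN c.Measurement    END AS Measurement2
    FROM   t_RegisterInfoTemp a
    LEFT JOIN ijasSummaryNo        b ON b.Reportnumber = a.Reportnumber
    LEFT JOIN appraisalsummaryblue c ON c.reportnumber = a.reportnumber;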
I have two publications. Some of the data is the same in the two publications. Both are configured as follows: identity range management is set to "automatic" and the tracking level is set to "column-level tracking". Up to there, everything works fine.
But if I delete one of the publications and then delete one of the rows that were replicated by the two publications, I get the following SQL exception: "Invalid object name 'dbo.MSmerge_repl_view_1CAD32C4FF904A3CA27518B0C4BFF716_70308DE2261C4EC784C56131902E7D1C'"
If I watch the status of the leftover replication through the Replication Monitor, I get this error message:
"Error messages: The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation. If a republishing Subscriber has run out of identity ranges, synchronize the republishing Subscriber to obtain more identity ranges before restarting the synchronization. If a Publisher runs out of identit (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199417) Get help: http://help/MSSQL_REPL-2147199417 The publisher's identity range allocation entry could not be found in MSmerge_identity_range table. (Source: MSSQLServer, Error number: 20663) Get help: http://help/20663"
I checked the given links but they're useless.
So I tried to reinitialize the subscription with the "use a new snapshot" option enabled, without any success either. I only obtained a new error message:
"The publisher's identity range allocation entry could not be found in MSmerge_identity_range table.
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 2.
Failed to pr"
I don't have any idea how to correct this issue, so I would appreciate any help.
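The only diagnostic I could think of so far was to look directly at the identity range tracking table in the published database, to see whether the publisher's entry really is missing (a minimal sketch):

    SELECT * FROM dbo.MSmerge_identity_range;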
I'd like to create a SQL view that divides an amount of 300,000 across the 12 months from July 2014 to June 2015, as shown below.
Amount   Month       Year
25,000   July        2014
25,000   August      2014
25,000   September   2014
25,000   October     2014
25,000   November    2014
25,000   December    2014
25,000   January     2015
25,000   February    2015
. . . .
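Something along these lines is roughly what I have in mind, assuming the total and the fiscal-year start are fixed (the view name is a placeholder, and this uses the SQL Server 2008+ VALUES row constructor):

    CREATE VIEW dbo.MonthlyAllocation
    AS
    SELECT 300000.0 / 12                                    AS Amount,
           DATENAME(MONTH, DATEADD(MONTH, m.n, '20140701')) AS [Month],
           YEAR(DATEADD(MONTH, m.n, '20140701'))            AS [Year]
    FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11)) AS m(n);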
Hello. I have received the following error upon an attempt to browse the cube. All other tabs are functional, including the Calculations tab. We are running Windows Server 2003 SP2 and SQL Server 2005 SP2. Any suggestions would be greatly appreciated!
**EDIT** - I have confirmed that SP1 for VS2005 is installed both locally and on the server.
Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Microsoft Visual Studio)
------------------------------ Program Location:
at Microsoft.Office.Interop.Owc11.PivotView.get_FieldSets()
at Microsoft.AnalysisServices.Controls.PivotTableFontAdjustor.TransformFonts(Font font)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdatePivotTable(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdateAll(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.InitialUpdate()
at Microsoft.AnalysisServices.Browse.CubeBrowser.SupportFunctionWhichCanFail(FunctionWhichCanFail function)
I've been researching AWE to determine if we should enable this for our environment.
Currently we have a quad-core box with 4 GB of RAM (VMware). OS: Windows 2003 Std, SQL Server 2005 Std. The /3GB switch is not set but will be as soon as we can perform maintenance on the server.
I have read mixed feedback on AWE: either it works great or it grinds you to a halt. I would assume that the grinding to a halt is due to not setting the min/max values correctly or not enabling the "lock pages in memory" setting.
We only have one instance of SQL Server on the box, and it won't be used for anything else aside from hosting SQL services. We do plan on running SSRS off this server as well.
1. Will running SSRS and enabling AWE cause me problems? Will I have to reduce the max setting by the SSRS memory usage, or will it share and play nice?
2. How do I go about setting the max value (a sketch of what I mean is after these questions)? Should it be less than the physical RAM in the box? Right now it's set to the default of 2147483647; even if I don't enable AWE, should this default value be changed?
3. It seems that even at idle the SQL Server holds a lot of memory and the page file grows. If I restart the process in the morning, memory usage in taskmon is at 600 MB or so. By the end of the day, it's up around 2 GB. How can I track down what's causing this, and should this even concern me?
4. The "lock pages in memory" setting worries me. Everything I've read on this seems to give a warning about serious OS and other program support degradation, in some cases to the point where they have to restore the settings on the server before they can bring it back up. What are your thoughts on this?
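Regarding question 2, this is roughly what I understand the configuration to look like; a minimal sketch, with 3072 MB as an example cap for a 4 GB box rather than a recommendation:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'awe enabled', 1;                   -- takes effect after a service restart
    EXEC sp_configure 'max server memory (MB)', 3072;     -- example cap, leaving room for the OS and SSRS
    RECONFIGURE;
    -- 'lock pages in memory' must also be granted to the SQL Server service account
    -- for AWE to be used.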
I have a Windows Server 2012 box with SQL Server 2012 Enterprise. The RAM size is 22 GB. Sometimes SQL Server takes 95% of the memory. My question: how do I reduce its memory usage without killing any process, since it's a production server with many background processes running? And is there any guide to learn why memory grows so high and how to reduce it?
Hello, I understand that we should use SSMS -> Server Properties -> Memory to put a cap on SQL Server memory usage so that some memory is left for the OS; this is based on the fact that if the max memory is not specified, SQL Server will use whatever memory is available and eventually crash the system.
My question is, when a server has the SSIS and SSAS services installed along with the SQL Server service, does the max memory setting cover the SSIS and SSAS memory usage, or do SSIS and SSAS have to share the remaining memory with the OS?
I am running Visual Studio 2005. I have an SSIS package which is consuming a huge amount of memory. During the execution of the package the memory keeps increasing, until finally I get an Out of Memory exception. I have run this package using dtexec and in BIDS; no difference. I do have some script components and have added some code to list the assemblies in the current AppDomain. I can see that one particular assembly count is increasing on every loop: the VBAssembly count goes up by 6 every time it hits the script component, and along with it the memory is climbing. What is this VBAssembly being used for, and is there an update to SQL Server Integration Services that I need?
SQL Server 2000 is running on Windows Server 2003 with 4 GB of memory on the server. Based on the Idera monitoring software, Windows 2003 was allocated 2.3 GB and SQL Server was allocated (and was using all of) 1.6 GB, for a total of approximately 4 GB, so all memory was allocated between the OS and SQL Server. Then 4 more GB of memory were added, for a total of 8 GB. Now the Idera monitor shows 1.7 GB for the OS and 1.0 GB for SQL Server. The 'System' info shows 8 GB of memory with PAE, so I assume the full 8 GB can now be addressed. Why are fewer resources being used now with more total memory, especially by SQL Server? I thought about specifying a minimum memory for SQL Server, but I am not convinced that would even work, since it seems this 1 GB limit is artificial. If it used 1.6 GB before, why would it not use at least that much now?
I have a database with a memory-optimized filegroup on it. How can I remove it? I have removed the memory-optimized table I had on it, but when I try to remove the filegroup I receive an error.