Hi, I came up with an idea that I want to share with you guys:
Here's the thing: I have an Access DB that I will migrate to SQL Server, optimizing its structure and data so as to improve its performance. On this last point, the current Access DB has some very redundant queries. Take this example as "source code":
Query C is built by selecting some fields, with some conditions, from queries A and B. I want to end this: query A should be query A, B should be B and C should be C, each defined on its own.
However, a new idea came to light! This data is to be displayed in a VB.NET web application, and I want to show the results of queries A, B and C one at a time, but all sequentially. What if I execute two commands, one reading query A (via a stored procedure) and another reading query B, save that data in an ArrayList of objects, and then produce query C from those application objects, which are already instantiated and ready to use, selecting the final data from those objects and not from the database?
Am I making a mistake, or am I actually improving performance? The final goal really is to get as much performance as we can! Any tips on this? Thanks a lot!
Does anyone know how to improve the performance of INSERT statements? I have to run a script of several thousand INSERT statements, but it just takes too long. Does anyone have any good tips for improving performance?
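One common cause is that in autocommit mode every INSERT is its own transaction, each forcing a log flush. Wrapping the statements in batches inside explicit transactions usually helps considerably. A minimal sketch, assuming a hypothetical table MyTable(col1, col2):

SET NOCOUNT ON;  -- suppress the per-row "1 row(s) affected" messages
BEGIN TRANSACTION;
INSERT INTO MyTable (col1, col2) VALUES (1, 'a');
INSERT INTO MyTable (col1, col2) VALUES (2, 'b');
-- ... a few hundred to a few thousand rows per batch ...
COMMIT TRANSACTION;

If the data originates in a file, BULK INSERT or bcp will be faster still than individual statements.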
Hello all, I have the following problem. Please forgive me for not posting a script, but I think it won't help anyway.

I have a table which is quite big (over 5 million records). This table contains one field (varchar(100)) which holds some data in a chain. There is a view on this table to present the data to the user. The problem is that this view needs to display some data from this one large field (using the SUBSTRING function or an inline function returning the value). In the application, the user is able to filter and sort on these fields. The situation gets more complicated if I combine this table with another one that has an additional, much larger field from which I need to select data in the same way.

Problem: it takes TOO LONG to select the data according to the user's request (the user accesses the view, not the table directly).

Now the questions:
- Is using SUBSTRING (as in the example) a good solution, or is it better to use an inline function that returns the part of the data set (probably there is no difference)?
- Would it be much faster if I added some extra varchar fields to Source_Table, containing only the parts I'm interested in, and bound those fields into the view instead of using the SUBSTRING function?

Small example:

CREATE TABLE [dbo].[Source_Table] (
    [CID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
    [MSrepl_tran_version] uniqueidentifier ROWGUIDCOL NULL ,
    [Date_Event] [datetime] NOT NULL ,
    [mama_id] [varchar] (6) COLLATE Latin1_General_CI_AS NOT NULL ,
    [mama_type] [varchar] (4) COLLATE Latin1_General_CI_AS NULL ,
    [tata_id] [varchar] (4) COLLATE Latin1_General_CI_AS NOT NULL ,
    [tata_type] [varchar] (2) COLLATE Latin1_General_CI_AS NULL ,
    [loc_id] [nvarchar] (64) COLLATE Latin1_General_CI_AS NOT NULL ,
    [sn_no] [smallint] NOT NULL ,
    [tel_type] [smallint] NULL ,
    [loc_status] [smallint] NULL ,
    [sq_break] [bit] NULL ,
    [cmpl_data] [varchar] (100) COLLATE Latin1_General_CI_AS NOT NULL ,
    [fk_cmpl_erp_data] [numeric](18, 0) NULL ,
    [erp_dynia] [bigint] NULL
) ON [PRIMARY]
GO

CREATE VIEW VIEW_AllData
AS
SELECT TOP 100 PERCENT
    ISNULL(SUBSTRING(RODZ.cmpl_data, 27, 10), '-') AS ASO_NO,
    (RODZ.mama_type + RODZ.mama_Id) AS MAMA,
    ISNULL(SUBSTRING(RODZ.cmpl_data, 45, 5), '-') AS MI,
    ISNULL(SUBSTRING(RODZ.cmpl_data, 57, 3), '-') AS ctl_EC,
    ISNULL(SUBSTRING(RODZ.cmpl_data, 60, 3), '-') AS ctl_IC,
    RODZ.Date_Event AS time_time,
    RODZ.sn_no AS SN
FROM Source_Table RODZ WITH (NOLOCK)
GO

Thanks in advance
Mateusz
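On the second question: yes, materializing the parts you filter and sort on generally helps, because the optimizer can then use an index instead of computing SUBSTRING for every row. One way to do it without maintaining duplicate data in triggers is a computed column; a minimal sketch in SQL Server 2005 syntax, assuming the ASO_NO fragment is the one users filter on (on SQL Server 2000, drop the PERSISTED keyword; the column can still be indexed):

ALTER TABLE dbo.Source_Table
    ADD ASO_NO AS SUBSTRING(cmpl_data, 27, 10) PERSISTED;
GO
-- An index on the computed column lets filters and sorts seek instead of scanning.
CREATE NONCLUSTERED INDEX IX_Source_Table_ASO_NO
    ON dbo.Source_Table (ASO_NO);
GO

The view can then select ISNULL(ASO_NO, '-') directly and the SUBSTRING work disappears from query time.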
I'm not sure if this is the right forum, but I have a general question about running/storing databases. I have been running a process with 60+ million records in one table and another 16 million in another table, and it is taking forever to import everything and run the appropriate queries. I've been doing this all on a desktop, and I am anxious to learn of a more efficient, faster method of processing this amount of data.
What solution should I pursue if I am doing this work a few times a year so that it doesn't take three full days of processing to reach an answer with the data?
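If the import itself is the slow part, row-by-row INSERTs are usually the bottleneck; a bulk load is dramatically faster and, with a table lock, minimally logged. A hedged sketch, assuming a hypothetical file C:\data\records.csv and a matching staging table dbo.Records:

BULK INSERT dbo.Records
FROM 'C:\data\records.csv'
WITH (
    FIELDTERMINATOR = ',',  -- column delimiter in the file
    ROWTERMINATOR = '\n',   -- row delimiter
    TABLOCK,                -- table lock enables minimal logging
    BATCHSIZE = 100000      -- commit in chunks to keep the log small
);

Loading into a bare table and creating indexes afterwards also tends to cut the total time, as does moving the work to a machine with separate disks for data and log.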
I've received conflicting information from Microsoft personnel so thought I'd see what some thoughts here are.
In summary, we upgraded a server from SQL Server 2000 SP4 Standard to SQL Server 2005 SP2 Standard. This server's main purpose is to handle a lot of merge replication to anonymous pull subscribers. We also have some transactional replication occurring. There is 8GB of memory on the server.
During the upgrade we ran into memory pressure on MemToLeave. We put the /3GB parameter in boot.ini and -g512 in the startup parameters, per Microsoft's suggestions. This got us past the upgrade process.
After the upgrade, we took off the boot.ini setting and the -g512. We enabled AWE and assigned 6GB to SQL Server. Then once in a while when the merge snapshots were running, we'd receive some "system out of memory" errors. I went ahead and put -g512 back on and haven't received the error since.
My question to Microsoft was: if we go to, say, 16GB of memory on the box and give, say, 14GB to SQL Server, would it be beneficial to set the -g option to a higher number? That's when the Microsoft person claimed that SQL Server 2005 Standard would not use anything above 4GB, which is the opposite of what the Microsoft site says, of what others have said, and of what I'm seeing for memory usage, with DBCC MEMORYSTATUS showing the 6GB being used. We'll be talking to our TAM about our support, specifically on replication topics, as we've had some problems getting knowledgeable support in this area. If anyone knows of support outside of Microsoft on replication topics, I'd love to hear about it.
Any thoughts on the tweaking of memory related to our environment? I know it may be site-specific and we may have to do some trial and error, but with:
1. heavy merge replication processing on the server (1,500 subscribers), and
2. say, 16GB on the box (the server is Windows 2003 SP2 Enterprise),
are there some suggestions for a -g setting to best utilize the buffer pool and MemToLeave? Some other things to do? Is there some process/method to help determine how best to define the memory settings? Is there a way to see how much BPool and/or MemToLeave the system is using at a given moment? DBCC MEMORYSTATUS gives a lot of info, but I'll be the first to admit that I don't know what a lot of it is really telling me. If there is some white paper, etc., that would help determine what the system is doing memory-wise, that would be great to know.
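For a point-in-time view that is easier to read than raw DBCC MEMORYSTATUS output, the SQL Server 2005 memory DMVs can be summarized per memory clerk; a sketch, with the caveat that clerk names vary between builds:

-- Total memory held by each memory clerk (the buffer pool appears as MEMORYCLERK_SQLBUFFERPOOL)
SELECT type,
       SUM(single_pages_kb) AS single_pages_kb,  -- single-page (buffer pool) allocations
       SUM(multi_pages_kb)  AS multi_pages_kb,   -- multi-page allocations, served outside the BPool
       SUM(virtual_memory_committed_kb) AS virtual_committed_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) DESC;

Since multi-page allocations are the ones served outside the buffer pool, the multi_pages_kb column gives a rough running view of MemToLeave consumption.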
One of our clients is running out of disk space on the SAN, and I was simply wondering if it's possible to increase the disks on the fly without any major problems. Should we take any special precautions? It's a clustered Win2k3 64-bit server with SQL Server 2005 Enterprise Edition.
Hi, I'm using SQL Server Express (9.0.2047). My database's *.ldf file is growing every day. How can I decrease it? How can I make the .ldf file small?
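The usual cause is a database in the FULL recovery model whose log is never backed up, so the inactive log can never be truncated. A hedged sketch of the check and the two common fixes, with MyDb and MyDb_log as placeholder names:

-- Which recovery model is the database in?
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery');
GO
-- Option 1: if point-in-time recovery is not needed, switch to SIMPLE...
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 100);  -- ...then shrink once; target size in MB, logical name assumed
GO
-- Option 2: stay in FULL, but schedule regular log backups, which truncate the log automatically:
-- BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb.trn';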
Can anyone tell me in some detail why the sqlservr.exe process is increasing in size drastically? I have checked all the server's parameters and the processes running on the server.
I feel the server is not releasing its queues and is occupying memory. Can anyone suggest what the cause could be?
I have increased the number of connections my SQL Server will allow, and now I cannot restart it; it keeps crashing and giving me an error message. Has anybody else come across this, or does anyone know how I can restart my SQL Server so I can at least do a bit of work today?
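A bad 'user connections' value is a classic cause of this, because each configured connection pre-allocates memory at startup. The usual way out is to start the instance in minimal configuration mode and put the setting back; a sketch, assuming a default instance:

-- 1) From a command prompt in the Binn directory, start in minimal configuration mode:
--      sqlservr.exe -f
-- 2) Connect with a query tool and reset the setting:
EXEC sp_configure 'user connections', 0;  -- 0 means SQL Server manages connections dynamically
RECONFIGURE WITH OVERRIDE;
-- 3) Stop that instance and restart the SQL Server service normally.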
I am having trouble increasing the size of the log device on a SQL 6.5 database. When I use SQL Enterprise Manager, I get an error saying that the device has 0 MB available. When I use the ALTER DATABASE statement, I get an error saying that there is not enough space on the disk, but I know this is not the case. Does anyone have any suggestions? Thanks in advance, Michael lawlor
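From what I remember of 6.5, ALTER DATABASE can only grow into free space on an existing device, so the device itself has to be expanded first; if Enterprise Manager reports 0 MB available, it is usually this device-level free space that has run out, not the disk. A heavily hedged sketch, assuming a hypothetical log device LOGDEV1 and example sizes:

-- Expand the device file; SIZE in 6.5 is the new total size in 2 KB pages
DISK RESIZE
    NAME = 'LOGDEV1',
    SIZE = 102400       -- 102400 * 2 KB = 200 MB total
GO
-- Then grow the database's log on that device (size in MB);
-- in some cases sp_logdevice must be re-run so the new space is used for the log
ALTER DATABASE mydb ON LOGDEV1 = 50
GO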
I faced a network problem for some days, which forced one of our replications to be stopped. The publisher database is a high-volume database. After I restarted the replication, the subscriber database's transaction log grew quickly because of the high volume of information to be inserted.
My concern is that, the way things are going, there will not be enough space for the log or for its backup files.
So, I have created a T-SQL job with the following commands:
BACKUP LOG database_name WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE (database_name, TRUNCATEONLY)
It runs every 20 minutes; however, the transaction log keeps growing.
I have also changed the database option "SELECT INTO/BULKCOPY" to TRUE in order to avoid logging bulk copies, but I believe it didn't work because it doesn't apply to the replication process.
Does anybody know if I can disable the transaction log, or otherwise avoid this growth during replication?
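The transaction log cannot be disabled in SQL Server, and with replication there is an extra catch: in a published database the log cannot truncate past the oldest transaction the log reader has not yet delivered, so TRUNCATE_ONLY backups will not reclaim that portion. Two commands show how full the log is and what is pinning it (the database name is a placeholder):

-- How full is each log, as a percentage?
DBCC SQLPERF(LOGSPACE);
GO
-- Shows the oldest active and oldest distributed (replicated) transaction in this database
DBCC OPENTRAN('database_name');
GO

If DBCC OPENTRAN reports an old replicated transaction, the log will not come down until the replication agents catch up.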
Thanks a lot! Regards, Felicia Schimidt felicia.schimidt@br.flextronics.com
I have a database with 1 .mdf data file and 1 .ldf t-log file. There are multiple inserts/deletes/transactions performed on the data daily, but the size of the two files remains constant (5,774,336 and 153,480 respectively). I perform daily full backups and hourly t-log backups (during business hours of 9-6), and these backup files change size, but why aren't my physical DB files changing? I have them set to auto-grow at 10%, unrestricted.
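That is normal: the files only auto-grow when the free space inside them is exhausted, and daily activity that roughly balances out (plus the log truncation your hourly log backups perform) just reuses existing space. You can see the internal free space like this:

-- Reserved, used, and unallocated space inside the current database
EXEC sp_spaceused;
GO
-- Log file fullness for every database on the instance
DBCC SQLPERF(LOGSPACE);
GO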
I'm using IIS 7. Our company runs its server on Classic ASP, and recently we met a big problem: the number of sessions increases indefinitely, and the error "503.3 - ASP.NET queue is full" occurs continuously. Is there any way to check for the problem, or can I see which source code page creates the sessions (e.g. 123.asp: 300 sessions, 234.asp: 4000 sessions)?
I am currently migrating a DB from Oracle to SQL Server (Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86))
I've used SSMA to do the migration, and I'm reviewing the procedures to check them. I have found a performance problem in one of them, which worked perfectly under Oracle, and I have tried lots of things with no luck, so I guess I need some help.
I insert a row into a table, and the time this takes is fine, but seconds later I need to read the row back, and this SELECT lasts 1-2 ms more every time. This process is repeated many times.
Each insert-select takes 200 ms when the first data arrives (including some other operations that do not increase the response time), and 200 insertions later it takes about 500 ms, which is really too much, considering it keeps increasing.
The table has 25+ columns, and some of them contain varchar of 3000+ characters.
I make the SELECT using 4 columns in the WHERE clause. One of them is numeric, and the rest are varchar (none is the primary key).
I've got a clustered index on the primary key and two more non-clustered indexes. One of them covers the columns I use in the SELECT, and its settings are Fill Factor 90 and Recompute Statistics Automatically.
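A SELECT whose cost grows with every inserted row usually means the statement is scanning rather than seeking; the index on the WHERE columns only helps if the plan actually uses it. Two things worth checking, sketched with placeholder names (dbo.MyTable, columns c1..c3 and cnum):

-- 1) Inspect the plan; an implicit conversion (e.g. an nvarchar parameter compared to a
--    varchar column, common after Oracle/SSMA migrations) silently disables index seeks:
DECLARE @p1 varchar(50), @p2 varchar(50), @p3 varchar(50), @p4 numeric(18, 0);
SET STATISTICS PROFILE ON;
SELECT *
FROM dbo.MyTable            -- placeholder table and column names
WHERE c1 = @p1 AND c2 = @p2 AND c3 = @p3 AND cnum = @p4;
SET STATISTICS PROFILE OFF;
GO
-- 2) Refresh statistics; on a rapidly growing table the auto-updated stats lag behind:
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;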
Would anyone have any suggestions/advice on how to determine what is causing the memory usage of sqlserver.exe to increase at a dramatic pace in Windows Task Manager? What would be a good way to slow this memory usage down? Thanks!
Hi all, our production server has 4GB of RAM and is running SQL Server 7.0. Since SQL Server 7.0 Standard Edition can by default use less than 2GB, our SQL Server is now using only 1.8GB (leaving the rest for the OS, Windows 2000 Server).
In order for SQL Server to take advantage of more than 2GB of RAM, it is suggested that boot.ini be modified to include the /3GB switch.
Has anyone seen any issues with doing this? Is it safe to do so on the Standard Edition of SQL Server 7.0?
Is there a way to increase the size of the procedure cache, or is it only an auto-configuring option? I have 2GB of memory, and when I check the size of the procedure cache it is just 10MB. I would like to increase it to around 50MB. I'm not sure if there is a setting for this; I had a look in BOL but could not find anything.
Our techs informed me that they are getting reports of a system slowdown. When they look, they find sqlserver.exe has lots of memory allocated to it. They reboot the server, and then it runs okay for a few weeks. They tell me this just started happening recently.
SQL Server itself has not been touched in months. They are, however, starting to use one of the databases more heavily.
I found a setting where you can set max_server_memory. Any problems if I set this to a value?
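Capping max server memory is the standard fix here: SQL Server will otherwise cache data pages until the OS pushes back, which looks like a leak in Task Manager but is by design. A sketch, where the 6144 MB value is only an example to adapt to your box:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Leave enough for the OS and other processes; 6144 MB is an illustrative value
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
GO

The setting takes effect immediately; no restart is required.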
Just wondering if you could help me on this one. I'm not sure if my transaction logs are behaving oddly or what. I've successfully managed to shrink my transaction logs from 7GB down to 1MB, and now I find it strange that the log file doesn't seem to increase in size. The timestamp of the log file is updating, but the size of the file is constant. I haven't configured my database to auto-shrink, so I'm really confused why it hasn't changed size for more than a month now. Hope I'm not losing any data here. Kindly advise.
I have a very big database with a number of people working on it. Its log file size is increasing too much every day. I am taking a log backup every 30 minutes.
I don't know whether I need to truncate the log file after taking the log backup or not. I am taking a differential backup every day and a full backup every week.
Please tell me, do I need to truncate the log file to reduce its size, or should I leave it as it is?
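Each log backup already truncates the inactive portion of the log internally, so no manual truncation is needed; truncation frees space inside the file but does not shrink the file itself. If the physical file has grown past what the 30-minute backup cycle needs, a one-off shrink after a log backup brings it back down; a sketch with placeholder names:

BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_log.trn';  -- truncates the inactive log
GO
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 500);  -- one-off shrink to 500 MB; logical file name assumed
GO

Avoid shrinking on a schedule, since the file will just grow again and fragment.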
I am using SQL Server 2000 with replication set up between two locations. The log size on the publisher grows to up to 25 times the data file size; I mean an 80 MB data file maintains a 2 GB log file, and it is the same for all five companies working on the same Windows 2000 Advanced Server box.
Since last week the server randomly disconnects user applications, and at those times a few tables cannot be opened on the server.
Can anyone give a reason why SQL Server 2000 misbehaves this way?
SELECT session_id,
       SUM(internal_objects_alloc_page_count)   AS task_internal_objects_alloc_page_count,
       SUM(internal_objects_dealloc_page_count) AS task_internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
WHERE internal_objects_alloc_page_count > 10
  AND session_id > 50
GROUP BY session_id;
[Code] ....
The database MDF is 27,806 MB, and I tried to shrink it but was unable to. It is a production server; I do not want to restart SQL Server. There are no open transactions.
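A shrink can only release space that is actually free inside the file, so the first step is to see how much unallocated space there is; if there is plenty and the shrink still does nothing, shrinking in small increments usually works better than one big request. A sketch with placeholder names:

-- How much of the file is actually unallocated?
USE MyDb;
EXEC sp_spaceused;
GO
-- Shrink in modest steps (target in MB); huge single-step shrinks often appear to hang
DBCC SHRINKFILE (MyDb_data, 26000);  -- logical file name assumed; repeat, lowering the target
GO

Note that shrinking the data file fragments indexes, so a rebuild afterwards is advisable.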
I'm having a problem with memory and CPU increasing after I enable the Service Broker on my database. Before I enable the broker, my memory is at 70 MB; after I enable it, memory climbs to 100, then 200, and keeps rising even to 700 MB, plus the SQL Server process uses more of the CPU, ranging between 30 and 40 percent.
Any ideas what could be going on, and possible solutions? Thanks
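One common cause of steady broker-related memory growth is messages piling up because no endpoint, route, or activated procedure is consuming them; undeliverable messages sit in the transmission queue. A quick check, assuming only a SQL Server 2005+ database with the broker enabled:

-- Messages stuck waiting to be delivered, with the reason they are stuck
SELECT to_service_name,
       transmission_status,   -- the last delivery error, if any
       COUNT(*) AS stuck_messages
FROM sys.transmission_queue
GROUP BY to_service_name, transmission_status;

A growing count here points at a delivery problem rather than a memory leak.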
The code below is almost there but falls over on Recordno 8:
SELECT curr.recordno,
       curr.speed,
       CASE
           WHEN curr.speed >= ISNULL(prev.speed, 0) THEN curr.speed
           ELSE (SELECT MAX(speed)
                 FROM speedtest
                 WHERE recordno BETWEEN (CASE WHEN curr.speed >= prev.speed
                                              THEN curr.recordno   -- "recordindex" in the post is assumed to mean recordno
                                              ELSE prev.recordno END)
                           AND curr.recordno)
       END AS adjusted_speed           -- column alias assumed; the post was truncated here
FROM speedtest curr                    -- the FROM/JOIN and closing END were missing from the post;
LEFT JOIN speedtest prev               -- a self-join on the previous recordno is assumed
       ON prev.recordno = curr.recordno - 1
Hi guys, are there any ways/suggestions for strengthening the security of SQL Server 2005? We've had several attacks on my database server from unknown places, so I would like a way to increase SQL security. I hope to gather some info from the web as well. Thanks a lot, guys.
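A common first step is reducing the surface area: disable the features you do not use and make sure the infamous ones are off. A minimal sketch of the sp_configure part (SQL Server 2005):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- xp_cmdshell is a favourite attack target; keep it disabled unless something truly needs it
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
GO
-- Remote admin connections off unless you use the DAC remotely
EXEC sp_configure 'remote admin connections', 0;
RECONFIGURE;
GO

Beyond that: put the server behind a firewall, restrict or change the default port 1433, prefer Windows authentication, and keep service packs and patches current.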