DB Design :: Database XYZ Has More Than 1000 Virtual Log Files Which Is Excessive
Jun 8, 2015
I am getting this message in the error log:
"Database XYZ has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files."
I am using SQL Server 2008 R2.
View 5 Replies
May 20, 2015
Is there a better way to deal with virtual log files? I see several approaches to reducing the virtual log files for a database. What is the best and safest approach, from the masters here?
View 9 Replies
View Related
Jul 19, 2015
I'm trying to upload 1000 txt files into one table in SQL Server. I'm using the following query to upload one txt file at a time:
bulk insert [dbo].AAA_2013_2015
from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (4).txt'
with (firstrow = 2,
lastrow = ???,
fieldterminator = ';',
rowterminator = '0x0A')
I'm trying to make the query skip the last row, because it gives me the following error:
Msg 4866, Level 16, State 1, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 17. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Is there a command to skip the last row, something like lastrow = all-1, or something like that?
I also tried the MAXERRORS option, like this:
bulk insert [dbo].AAA_2013_2015
from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (15).txt'
with (firstrow = 2,
fieldterminator = ';',
MAXERRORS = max_errors,
rowterminator = '0x0A')
It does not recognize the MAXERRORS option; I also tried putting a number of errors instead of max_errors.
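For what it's worth, BULK INSERT has no lastrow = all-1 syntax, and MAXERRORS only accepts an integer literal (e.g. MAXERRORS = 10), so the placeholder max_errors will never parse. One hedged workaround is to count the file's lines first and derive LASTROW from that. A minimal sketch, assuming the data contains no tab characters (the staging load relies on the default field terminator never firing); the path is the same file referenced above:
-- Count the lines with a one-column staging load, then run the real
-- BULK INSERT with LASTROW = line count - 1 to drop the bad final line.
CREATE TABLE #lines (line NVARCHAR(MAX));
BULK INSERT #lines
FROM '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (4).txt'
WITH (ROWTERMINATOR = '0x0A');
DECLARE @last INT = (SELECT COUNT(*) FROM #lines) - 1;
DECLARE @sql NVARCHAR(MAX) = N'
bulk insert [dbo].AAA_2013_2015
from ''\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (4).txt''
with (firstrow = 2, lastrow = ' + CAST(@last AS NVARCHAR(10)) + N',
fieldterminator = '';'', rowterminator = ''0x0A'');';
EXEC sp_executesql @sql;
DROP TABLE #lines;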
View 0 Replies
View Related
Oct 1, 2004
I want to reduce the number of virtual log files I have. In reading through the Books Online documentation, I realized that I forgot to move the transaction log files to a different drive. Now that the server is in production, I wanted to get some input about the best way of making this change.
Can I just change the directory the log files are written to in the DB properties without any adverse problems occurring?
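Probably not just through the DB properties on a SQL Server 2000-era instance; the standard method then was detach, move the file, reattach, during a maintenance window. A minimal sketch, with database and path names as placeholders:
USE master
-- Requires no active connections to the database.
EXEC sp_detach_db @dbname = N'MyDb'
-- Move MyDb_log.ldf to the new drive at the OS level, then reattach:
EXEC sp_attach_db @dbname = N'MyDb',
    @filename1 = N'D:\Data\MyDb.mdf',
    @filename2 = N'E:\Logs\MyDb_log.ldf'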
View 2 Replies
View Related
Nov 8, 2012
We are using SQL Server 2008 R2 Standard with SP2. I have a 12 GB database on the production server; the log file was set to 150 MB with a growth increment of 10 percent, and in the last 4 years the database size has gone from 2 GB to 12 GB. I ran the following command:
DBCC LOGINFO
and found I have 150-plus rows (which means 150-plus virtual log files).
I increased the size of the log file to 25 percent of the data file, which comes to approximately 3 GB, and also set autogrowth to 20 percent ...
Additional info: we have a log shipping environment in production, and I am taking log backups every 15 minutes. Still, the number of virtual log files is the same. Why is that, and how do I bring them between 25 and 50, as that's the recommended number?
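The likely reason the count has not moved: changing the growth increment only shapes future growths; the VLFs already in the file remain until the log is shrunk and re-grown. A hedged sketch of the usual remediation, with the logical file name assumed (check sys.database_files); it is compatible with log shipping since no recovery-model change is involved:
USE MyDb;
-- Shrink the log as small as it will go (a log backup first helps the
-- active portion wrap), then grow it back in one large step so the new
-- VLFs are few and large.
DBCC SHRINKFILE (MyDb_log, 1);
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, SIZE = 3072MB, FILEGROWTH = 512MB);
DBCC LOGINFO;   -- recount the VLFs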
View 5 Replies
View Related
Aug 9, 2007
Hi all,
I'm trying to get an understanding of a serious problem I have with a large DB in production. This is going to be obvious to someone (everyone probably) <bg>
I have a table which consists of numerous varchars and ints but also a Text type field. This table resides in a SQL 2000 database. This DB currently has a data file size of 16 GB and a transaction log size of 17 GB. When I edit the table and increase the size of a varchar field from 50 to 100, these files grow to more than double their size!
Why is this happening and how can I prevent this?
TIA
NozFx
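A likely cause, offered as a guess: the Enterprise Manager table designer often applies such a change by creating a new table, copying every row across, and dropping the original, all fully logged, which would explain the doubling. Widening the column directly is a metadata-only operation; a sketch with placeholder names:
-- No data movement, so no file growth; restate the column's existing
-- NULL/NOT NULL setting so it is not accidentally changed.
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn varchar(100) NULL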
View 1 Replies
View Related
Oct 14, 2015
Is there a good starting point for understanding, for a specific db, how many VLFs are acceptable so that they do not cause long startup or backup times?
Also, I need some calculation so that I can identify the best growth parameter to set up for each database.
I'm seeing the below msg in the errorlog and am curious to know the changes (right-sizing/growth) to be made. As of now a log file growth value of 100 MB is set (refer: [URL] ....)
Database BizTalkMsgBoxDb has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.
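There is no single published maximum, but low hundreds of VLFs is a common comfort zone. DBCC LOGINFO returns one row per VLF, so the count is easy to check; for growth sizing, the commonly cited pre-2014 rule is that each growth under 64 MB adds 4 VLFs, 64 MB up to 1 GB adds 8, and 1 GB or more adds 16, so a 100 MB increment reaching 1000+ VLFs implies well over a hundred growth events:
USE BizTalkMsgBoxDb;
DBCC LOGINFO;   -- one row per VLF; the row count is the VLF count
-- A larger fixed increment (e.g. 512 MB, adding 8 larger VLFs per growth)
-- plus an adequately pre-sized log keeps the count down.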
View 3 Replies
View Related
Oct 24, 2001
Is it safe to back up while the database is running 1000 transactions/sec? If yes, why should I buy Veritas Backup Exec Server Edition and Veritas Backup Exec Online Backup Pack?
Michael
View 2 Replies
View Related
Jan 25, 2014
I am using the internet through a dongle connection, and every time I connect to the internet the IP gets changed.
1. How do I create a database when the IP is not constant?
2. Can I use the DNS server IP for the database engine?
3. Is there any way to get the system IP?
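On question 3, if a connection can be established at all, the instance can report the address and port the current session arrived on; a sketch, assuming SQL Server 2008 or later. For question 1, connecting by hostname rather than IP sidesteps the changing address entirely:
SELECT CONNECTIONPROPERTY('local_net_address') AS server_ip,
       CONNECTIONPROPERTY('local_tcp_port')    AS server_port;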
View 4 Replies
View Related
Aug 13, 2007
Not sure where to place this question...
We have a bunch of SQL databases that are used for a similar number of IIS sites (we are talking close to 500 sites), dealing with a .NET e-learning application.
While we managed to create a virtual folder for the e-learning content, we have one database for each site. So, when it comes to updating the whole platform, we are talking about running one script per site, which may take about 5 seconds per user per site. With a total of 300,000 users and sites, it may come to more than 17 days running the update script.
Is there a way, similar to IIS, to create a symbolic link from a table in one database to a table in a different database on a different or the same server?
Thanks,
AG
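One hedged possibility, assuming SQL Server 2005 or later: a synonym behaves much like the symbolic link described, aliasing a local name to a table that actually lives in another database, or on another server via a linked server. Names below are placeholders:
-- Point a local name at a shared table in a central database...
CREATE SYNONYM dbo.CourseContent FOR CentralDb.dbo.CourseContent;
-- ...or, through a linked server named CENTRALSRV, on another machine:
-- CREATE SYNONYM dbo.CourseContent FOR CENTRALSRV.CentralDb.dbo.CourseContent;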
View 5 Replies
View Related
May 13, 2015
I have a scenario like below
         Product1  Product2  Product3  Product4  Product5
Product1    1         1         0         0         1
Product2    1         1         0         0         1
Product3    0         0         1         1         0
Product4    0         0         1         1         0
Product5    1         1         0         0         1
How would I design tables in SQL Server for the above?
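A hedged sketch of one normalized design, assuming the 1s mark a symmetric "related products" relationship: store each product once and only the related pairs, rather than the full matrix:
CREATE TABLE dbo.Product (
    product_id   int IDENTITY(1,1) PRIMARY KEY,
    product_name nvarchar(100) NOT NULL
);
CREATE TABLE dbo.ProductRelation (
    product_id         int NOT NULL REFERENCES dbo.Product(product_id),
    related_product_id int NOT NULL REFERENCES dbo.Product(product_id),
    PRIMARY KEY (product_id, related_product_id)
);
-- The Product1 row of the matrix (related to Product2 and Product5) becomes:
-- INSERT dbo.ProductRelation VALUES (1, 2), (1, 5);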
View 2 Replies
View Related
Jul 14, 2015
In my environment I have one database with 6 ndf files, 5 ldf files, and one mdf file. What I am actually looking to do is merge the 6 ndf files into one ndf file and the 5 ldf files into one ldf file. Is it possible to do this? I tried using the MOVE ... TO option while restoring a backup, but I get the error message below.
ERROR:
Msg 3176, Level 16, State 1, Line 4
File 'J:\NDF\abc.ndf' is claimed by 'Finance_data2'(4) and 'Finance_data1'(3). The WITH MOVE clause can be used to relocate one or more files.
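Restore WITH MOVE can relocate files but not merge them, which is what Msg 3176 is saying. The usual in-place route is sketched below, with the database and logical file names guessed from the error message (check sys.database_files); it assumes the extra data files share a filegroup:
USE Finance;
-- Push all pages out of the extra data file, then drop it; repeat per .ndf.
DBCC SHRINKFILE (Finance_data2, EMPTYFILE);
ALTER DATABASE Finance REMOVE FILE Finance_data2;
-- Extra log files: the log is written sequentially anyway, so shrink each
-- surplus ldf to empty and REMOVE FILE it the same way.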
View 4 Replies
View Related
Feb 23, 2006
Hi,
On my laptop I am running Virtual Server 2005 with 2 x Windows 2003 Servers. Both of the virtual servers can access the internet and shared files on the host laptop. On the host laptop I have SQL Server 2000 running.
I have written a Windows Service to detect application errors and fire duplicates of the events to the database, and this service sits on both virtual servers. It does work, as I have tested it on the host. BUT every time I make application errors on the virtual servers, nothing is deposited into my database. Very frustrating!
I have also tried to create an ODBC connection through the Data Source Administrator, and when I enter the login credentials for SQL I get connection error messages.
Any suggestions would be great!
Thanks
Sun
View 3 Replies
View Related
Jul 9, 2015
I want to control the size of the ldf files and mdf files of several databases on SQL Server 2008 in my organization (manual increase), but I have a question:
What would be the best practices (best methods) for provisioning an ldf file and mdf file? Does any generic formula exist?
With this I want to avoid the shrink operation and the autogrow of SQL Server databases...
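No single generic formula exists, but a common hedged starting point: size the data file for projected volume plus headroom, size the log from the largest single operation it must absorb (index rebuilds often dominate; 10-25% of data size is a frequent first guess), and keep a fixed-MB autogrowth only as a safety net rather than as the sizing mechanism. Placeholder names and sizes:
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, SIZE = 50GB, FILEGROWTH = 1GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  SIZE = 8GB,  FILEGROWTH = 512MB);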
View 6 Replies
View Related
May 26, 2015
I have a database whose log file size is 300 GB. As the drive is filling up, I need to clear space on the drive, and for that I have to shrink the log file.
Unfortunately I don't have the option to take a backup of the database, and I am not able to shrink the file now. Is there any way to shrink the log file without taking a backup of it?
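A hedged sketch of the common workaround. It breaks the log backup chain, so it is only defensible when point-in-time recovery is already off the table, as it sounds like here, and a full backup should follow as soon as a backup destination exists. The logical log file name is a placeholder:
ALTER DATABASE MyDb SET RECOVERY SIMPLE;   -- lets the log clear without log backups
DBCC SHRINKFILE (MyDb_log, 1024);          -- target size in MB
ALTER DATABASE MyDb SET RECOVERY FULL;     -- restore the original model afterwards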
View 5 Replies
View Related
May 26, 2007
When I try to add a (SQL Server 2005 Express Edition) database to my project (I've tried a Windows application and an ASP.NET application) in Visual Studio 2005 Professional Edition, it fails with an error like this:
"create database failed. some file names listed could not be created (...)"
but when I open the project from the real path it works.
Is this a bug? Is there any solution? (In many situations there is a need for working with virtual drives. There must be some workaround ...)
Thanks
View 1 Replies
View Related
May 11, 2015
I tried the FileTable samples in SQL Server 2012 for storing files in the database. I have the following questions.
1. When I right-click on the FileTable and say Explore File Table Directory, I am taken to a directory. But if I search for the same directory on my server/machine, the directory is not available.
2. Can I use some external server/disk as a directory for the FileTable?
3. How easy is it to use FileTables from .NET web applications?
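On question 1, the directory SSMS opens is a virtual UNC share served by SQL Server's FILESTREAM driver, not a physical folder you can browse locally, which also answers question 2: the files live in the database's FILESTREAM filegroup, not on an arbitrary external disk. A minimal setup sketch, assuming FILESTREAM is already enabled at the instance level; all names and paths are placeholders:
ALTER DATABASE MyDb ADD FILEGROUP MyDbFS CONTAINS FILESTREAM;
ALTER DATABASE MyDb ADD FILE (NAME = MyDbFSData, FILENAME = 'D:\FSData\MyDb')
    TO FILEGROUP MyDbFS;   -- parent folder must exist; the MyDb subfolder must not
ALTER DATABASE MyDb
    SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'MyDbFiles');
CREATE TABLE dbo.Documents AS FILETABLE
    WITH (FILETABLE_DIRECTORY = 'Documents');
-- .NET web apps (question 3) can then use ordinary file IO against the
-- share \\<machine>\<instance-share>\MyDbFiles\Documents, or query the
-- table like any other.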
View 2 Replies
View Related
Feb 13, 2002
Ok, I'm doing a football database for fixtures and stuff. The problem I am having is that a fixture has both a home and an away team. The tables as a result are something like this:
-------
Fixture
-------
fix_id
fix_date
fix_played
----
Team
----
tem_id
tem_name
-----------
TeamFixture
-----------
fix_id
tem_id
homeorawayteam
goals
It's not exactly like that, but you get the point. The question is, can I do a fixture query which results in one record per fixture, showing both teams' details, the first in a home-team field and the second in an away-team field?
Fixture contains the details about the fixture like date and fixture id and has it been played
Team contains team info like team id, name, associated graphic
TeamFixture is the table which links the fixture to its home and away teams.
TeamFixture exists to prevent a many to many type relationship.
Make sense? Sorry if this turns out to be really easy, just can't get my head around it at the mo!
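Yes: join TeamFixture (and Team) twice, once filtered to the home row and once to the away row, so each fixture collapses to a single record. A sketch against the outline above, with the home/away flag values assumed:
SELECT f.fix_id, f.fix_date,
       th.tem_name AS home_team, tfh.goals AS home_goals,
       ta.tem_name AS away_team, tfa.goals AS away_goals
FROM Fixture f
JOIN TeamFixture tfh ON tfh.fix_id = f.fix_id AND tfh.homeorawayteam = 'H'
JOIN Team th         ON th.tem_id  = tfh.tem_id
JOIN TeamFixture tfa ON tfa.fix_id = f.fix_id AND tfa.homeorawayteam = 'A'
JOIN Team ta         ON ta.tem_id  = tfa.tem_id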
View 2 Replies
View Related
Feb 11, 1999
Help !!
I am running a database of 500-600 MB, 20-30% of which is new data daily (5-day-old data being deleted as part of the nightly maintenance), and my nightly maintenance is regularly taking an hour plus.
CheckDB, New Alloc, Catalog, re-indexing and dumps are performed nightly (2am ish), and as the system is in constant use I cannot afford such a long task. I can't use weekly dumps/CheckDB as we use transaction log replication and the logs are dumped every minute. I really need some suggestions on how I can improve matters. The deletion of old data in particular is taking a long time due to the use of a cursor with local variables, but is there a faster way to do this:
DECLARE @connectionid int, @dRent int
-- Cursor declaration added for completeness (assumed; not shown in the original)
DECLARE tnames_cursor CURSOR FOR
SELECT Cid FROM ControlDB..connectiontable
OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @connectionid
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
Select @dRent = DeliveredRetention from ControlDB..connectiontable
where ControlDB..connectiontable.Cid = @connectionid
Delete from MyDB..[Table] where Cid = @connectionid -- was @cid, which is never declared or set
and DateDelivered IS NOT NULL -- "!= NULL" never matches with ANSI_NULLS on
and Datediff(hh,MyDB..[Table].DateDelivered,getdate()) >= (@dRent*24)
END
FETCH NEXT FROM tnames_cursor INTO @connectionid
END
CLOSE tnames_cursor
DEALLOCATE tnames_cursor
GO
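For what it's worth, the cursor can usually collapse into a single set-based DELETE joined to the retention table, which removes the row-by-row round trips entirely (syntax as accepted by modern T-SQL; worth verifying on a 6.5-era server):
DELETE t
FROM MyDB..[Table] AS t
JOIN ControlDB..connectiontable AS c ON c.Cid = t.Cid
WHERE t.DateDelivered IS NOT NULL
  AND DATEDIFF(hh, t.DateDelivered, GETDATE()) >= c.DeliveredRetention * 24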
These jobs have also started running out of locks and deadlocking on occasion, which seems odd as the system has 10,000 locks available (escalating at 2,000).
Any Suggestions would be very much appreciated
Damon
View 1 Replies
View Related
Nov 27, 2006
Hi there group.
Could some please point me in the right direction?
We have a database that is about 28 GB in size, and recently the SQL Server process has been using approximately 1.6 GB of memory.
I have tried running SQL Profiler to find out which stored procedure is causing this but came up unsuccessful.
When restarting SQL, the process runs at about 50 MB for about 20 seconds and then starts climbing up to 1.6 GB of memory usage.
Please assist.
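Hedged observation: that curve is ordinary buffer-pool behaviour rather than a leak; SQL Server caches data pages up to its configured ceiling after a restart and releases memory only under OS pressure. If 1.6 GB is too much for this box, cap it explicitly:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 1024;   -- ceiling in MB
RECONFIGURE;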
View 12 Replies
View Related
Apr 16, 2008
I'm running into a blocking problem on my SQL 2000 server. I have a table that is frequently read/written to (inserts, updates, deletes). I don't place any explicit locks, but I do a SELECT @@IDENTITY after I insert a record to get the identity value via a SqlCommand.ExecuteScalar.
So my questions:
#1 Is blocking normal? (40-90 blocks consistently - 350 or so client connections)
#2 Is there any better coding solution to avoid blocks?
#3 I need to get the identity value after the record is added, and I thought ExecuteScalar was the fastest with the least overhead, but perhaps I'm wrong?
Any suggestions or hints welcome.
Thanks, Rob.
.NET 2.0
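On #3, one hedged improvement: @@IDENTITY returns the last identity generated on the connection in any scope, so a trigger that inserts into another identity table silently returns the wrong value; SCOPE_IDENTITY() (available since SQL 2000) avoids that, and sending it in the same batch as the INSERT keeps everything in one ExecuteScalar round trip. Table and column names are placeholders:
INSERT INTO dbo.Orders (CustomerId) VALUES (@CustomerId);
SELECT CAST(SCOPE_IDENTITY() AS int) AS new_id;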
View 4 Replies
View Related
Mar 16, 1999
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
Thanks in advance.
View 1 Replies
View Related
May 23, 2006
Hi there,
Currently using SQL Server 2000 (SP4). The following condition started occurring last week:
- Server has excessive blocking
- Majority of the processes are in runnable state
- Excessive blocking happens for a few minutes and repeats throughout the day. It does not happen at night.
- Nothing on the server errorlog, profiler
- CPU averages 40 - 50% at that point of excessive blocking
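A hedged starting point while the condition is live on SQL 2000: snapshot who is blocking whom and what they are waiting on, then pull the head blocker's current statement:
SELECT spid, blocked, lastwaittype, waittime, cmd, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0;
-- Then, for the spid at the head of the chain:
-- DBCC INPUTBUFFER (<head_blocker_spid>);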
Any help would be greatly appreciated.
Thanks.
View 7 Replies
View Related
Jun 25, 2007
Since the other related topic is closed/answered...
The Short version:
SQL is now logging too much info with every package. The volume of the new "User: Diagnostic" event has caused some packages to fail and the command-line exclusion option appears to have no effect on the events logged to the SQL provider. Is this a bug in dtexec or am I using the wrong syntax to exclude log entries? I don't want to modify all of my SSIS packages...
More Info:
SQL SP2 introduced new logging events, most of which appear to get logged by default. So far, none of our packages have used any sort of explicit logging configuration; it's all been set at the command line using syntax like that shown below:
dtexec.exe /FILE "D:\SSIS Packages\MyAppVendors.dtsx" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REP E;Diagnostic /LOGGER "{6AA833A1-E4B2-4431-831B-DE695049DC61}";"MyDBConnName"
This does appear to correctly limit what gets logged to the console (and thereby the SQL Agent's job step log), but has no effect on what's logged to the database. Normally, I'd use /REP EWDCI, but I was attempting to limit the log entries to Errors only.
I first came across this error when a package failed, but it only logged the following to the console with nothing in sysdtslog90 (while not the "latest/greatest" server, this is a relatively low-utilized quad 2.8 GHz Xeon ProLiant DL580 G2):
Error: 2007-06-21 06:01:30.45
Code: 0xC0202009
Source: MYPACKAGENAME Log provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}"
Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Connection is busy with results for another command".
End Error
Error: 2007-06-21 06:01:30.46
Code: 0xC0014010
Source: MYPACKAGESTEP Load
Description: The SSIS logging provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}" failed with error code 0xC0202009 ((null)). This indicates a logging error attributable to the specified log provider.
End Error
I changed this one package to only log OnError events, but I'd rather not have to change every package to do the same, plus I'd like the ability to easily turn on verbose or any other logging level when needed.
View 1 Replies
View Related
May 22, 2006
We have a 3rd-party SQL 2000 app, mostly bad SQL, and we have lock issues. When monitoring SQL lock requests per second, I normally get between 500,000 and 1,000,000 requests. For a 4-way box with 16 GB of memory, what is considered an excessive amount of locks?
View 3 Replies
View Related
Apr 18, 2000
I have SQL 6.5 SP5a with SMS 1.2 SP4 on separate Alpha boxes. I have automated the backups so they are scheduled for after hours. SMS gets backed up first and TEMPDB shortly afterwards. However, since a backlog in SMS MIFs happened, the TEMPDB backup reports 100,000 pages backed up. When you back it up on its own, it only shows 170+ pages.
The SMS DB is 600 MB in size, the log is 210 MB, open objects is 5000, and TEMPDB is set to 210 MB on its own device.
Any ideas
View 1 Replies
View Related
Jul 23, 2005
Hello!
I am trying to investigate a strange problem with a particular stored procedure. It runs OK for several days and suddenly we start getting a lot of locks, the reason being a [COMPILE] lock placed on this procedure. As a result, we have 40-50 other connections waiting, then the next connection using this procedure has a [COMPILE] lock, etc. The client is fully qualifying the stored procedure by database/owner name and it doesn't start with sp_. I know these are the usual reasons for a [COMPILE] lock being placed. Is there something else that might trigger this lock? When troubleshooting this issue, I noticed there was no plan for this procedure in syscacheobjects. The stored procedure is very simple (I know it could be rewritten/optimized but our developer wrote it):
CREATE PROCEDURE [dbo].[vsp_mail_select]
@user_id int,
@folder_id int,
@is_read bit = 1, -- IF 1, pull everything, else just pull unread mail
@start_index int = null, -- unused for now, we return everything
@total_count int = null output, -- count of all mail in specified folder
@unread_count int = null output -- count of unread mail in specified folder
AS
SET NOCOUNT ON
select m1.* from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((@is_read = 0 and is_read = 0) or (@is_read = 1))
order by date_sent desc
select @total_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((is_read = 0 and @is_read = 0) or (@is_read = 1))
select @unread_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id and is_read = 0
GO
I was monitoring the server for a couple of days before, and I am not sure why this happens every 3-4 days only!
Any help on this matter would be greatly appreciated!
Thanks,
Igor
View 1 Replies
View Related
Jul 31, 2007
We are running SQL Server 2000 Enterprise Edition on a 2-node cluster with an IIS/ASP.NET front end hosting 150-200 active connections. There is a SVCHOST process running under the LOCAL SERVICE account, hosting the Remote Registry service, that is using only 4,200K but is page faulting 200-500 times per second. I realize this process is used for failover, but the page-fault rate seems excessive. Any thoughts on this?
The servers are running Windows Server 2003 with 4 processors and 4 GB RAM.
View 1 Replies
View Related
Aug 3, 2007
We have a SQL 2000 database (Publisher) replicating inserts and updates across a 10 Mb link to a SQL 2005 database (Subscriber). The Publisher has two tables we are interested in, one with 50 columns and one with 15. Both tables have 6 insert/update triggers that fire when a change is made, to update columns on the publisher database.
We have set up pull transactional replication from the Subscriber to occur against the Publisher every minute. We have limited the subscription/replication configuration to publish 6 columns from table 1 and 4 from table 2. Any changes occurring on any other columns in the Publisher are of no interest. The SQL 2005 database has a trigger on table 1 and table 2 to insert values into a third table. There are around 7,000 inserts/updates on table 1 and 28,000 on table 2 per day. All fields in the tables are text.
We are seeing "excessive" network traffic of approximately 1 MB per minute (approx 2 GB per 24 hrs). We also see that the Distributor databases are getting very large, up to around 30 GB, and growing until they get culled. We have reduced the culling interval from 72 hours to 24 hours to reduce the size.
Does anyone have any suggestions as to how this "excessive" network traffic can be minimised and how the distributor database size can be minimised? I think maybe they are both related?
Thanks,
Geoff
WA POLICE
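They are very likely related: the distribution database holds every replicated command until its retention window expires, so a wide window combined with per-minute polling keeps it large. One hedged knob, assuming the default distribution database name, is to tighten the maximum retention (in hours):
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value = 24;   -- keep delivered transactions for at most 24 hours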
View 5 Replies
View Related
Oct 29, 2015
I work in an organisation and we have to find a solution for data consistency in the database. Our partners send details to the organisation that are inserted directly into the database, so we want to create a new database as a buffer database, insert the information from the partners there, and then update the main database. Is there a better solution instead of that?
View 6 Replies
View Related
Feb 24, 2006
Hello everyone,
I have a web control that uses database structures a lot; it uses the system tables in SQL to read column information from tables. To ease the load on the SQL server I have a property that stores this information in a cache, and everything works fine.
I am doing some research to find out if there is any way to get notified by the SQL server that the structure of a table has changed. I want to know if a column or table has changed any values, like datatype, name, properties, etc.
Any suggestions out there?!
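Assuming SQL Server 2005 or later is available, a database-scoped DDL trigger can record schema changes as they happen, so the control only needs to watch one log table instead of re-reading the system tables. A minimal sketch, with the log table name as a placeholder:
CREATE TABLE dbo.SchemaChangeLog (
    event_data xml      NOT NULL,
    changed_at datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_schema_watch ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
    -- EVENTDATA() carries the object name, statement text, etc.
    INSERT INTO dbo.SchemaChangeLog (event_data) VALUES (EVENTDATA());
-- The cache can then be invalidated whenever MAX(changed_at) moves.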
View 3 Replies
View Related
Jul 23, 2005
I have a system that basically stores a database within a database (I'm sure lots of you have done this before in some form or another).
At the end of the day, I'm storing the actual data generically in a column of type nvarchar(4000), but I want to add support for unlimited text. I want to do this in a smart fashion. Right now I am leaning towards putting in 2 nullable Value fields:
ValueLong ntext nullable
ValueShort nvarchar(4000) nullable
and dynamically storing the info in one or the other depending on the size. ASP.NET does this exact thing in its Session State model; look at the ASPStateTempSessions table. This table has both a SessionItemShort of type varbinary(7000) and a SessionItemLong of type Image.
My question is, is it better to use varbinary(7000) and Image? I'm thinking maybe I should go down this path, simply because ASP.NET does, but I don't really know why. Does anyone know what the benefit of using the varbinary and Image datatypes would be? If it's just to allow saving of binary data, then I don't really need that right now (and I don't think ASP.NET does either). Are there any other reasons?
thanks,
dave
View 7 Replies
View Related
Aug 16, 2007
Hi All,
Can you please suggest some books for relational database design or database modelling (knowledgeable yet simple), i.e. from which we could learn database relationships (one to many, many to one, etc.), building ER diagrams, proper usage of ER diagrams in our database (primary key / foreign key relations), designing small modules, relating tables, and everything that relates to database design... Because I think database design is the crucial part of a database, and we must know the design part first, before starting with databases.
Thanks, and very grateful to all of you....
Vikas
View 3 Replies
View Related