I am new to Visual Studio 2008, but I pick things up pretty quickly (I used to lift code from .bas files and whatnot in 5th grade for VB4 AOL "progz"). Anyway, all I want to do is create a simple single-form window as my first project with VS 2008. I have a huge list of movies, and I would like a form to help me list what titles I have and their location (friend's house, CD book 1, book 2, movie tower, etc.), and I would like to be able to add to the list, delete entries, and edit locations from my form. Right now I got as far as designing my interface and started to add a database. I can't even get past that, because...
I go to Add Item from my menu bar...
Add Item...
Local Database...
Then the configuration wizard comes up and says an error occurred while retrieving the information from the database:
I need to run a SQL job if a specific file exists. That specific file gets created at a different time each day, and I need to run the job when the file arrives. Is there any way to do this in SQL? I have done this with the Seagate scheduler, but I need to do it in SQL. Please help!
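One approach worth sketching, as an assumption on my part rather than a known feature: schedule an Agent job step every few minutes that checks for the file with the undocumented xp_fileexist procedure and starts the real job only when the file has arrived. The path 'C:\Import\daily_feed.txt' and the job name 'LoadDailyFeed' below are placeholders.

-- Runs as a frequently scheduled job step; does nothing until the file shows up.
DECLARE @exists INT;
EXEC master.dbo.xp_fileexist 'C:\Import\daily_feed.txt', @exists OUTPUT;  -- hypothetical path
IF @exists = 1
    EXEC msdb.dbo.sp_start_job @job_name = 'LoadDailyFeed';  -- hypothetical job name

You would also want to move or rename the file at the end of the real job so the check does not fire again the next time the polling step runs.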
Hi, does anyone know how to access the *.evt file (SysEvent.Evt) from SQL in order to import the data from the file into a server table?
When you run Event Viewer there is an option to save the log file as a *.csv file and then use it. I want to eliminate this step (or make it automatic) and get the data into SQL right away.
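There is no direct way I know of to read the .evt file from T-SQL, but a rough sketch of an automated alternative (assuming xp_cmdshell is enabled and Windows PowerShell is installed on the server; the staging table and the C:\Temp path are made-up names) is to dump the live System log to a delimited file and BULK INSERT it. The Message column is left out here to keep the delimiting simple.

-- Export the System event log to a pipe-delimited file (hypothetical path).
EXEC master.dbo.xp_cmdshell
    'powershell -Command "Get-EventLog -LogName System | ForEach-Object { ''{0}|{1}|{2}|{3}'' -f $_.TimeGenerated, $_.EntryType, $_.Source, $_.EventID } | Set-Content C:\Temp\sysevent.txt"';

-- Hypothetical staging table matching the exported columns.
CREATE TABLE dbo.SysEventStaging
(
    TimeGenerated VARCHAR(50),
    EntryType     VARCHAR(50),
    Source        VARCHAR(255),
    EventID       VARCHAR(20)
);

BULK INSERT dbo.SysEventStaging
FROM 'C:\Temp\sysevent.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');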
I have a SQL Server that generates a reasonable amount of log data in the SQL event logs. I'm finding it difficult to use this data, as the individual log files become rather large, i.e. several hundred MB, before they rotate.
What I would like to achieve is to set a constraint like: each log will rotate when it reaches [XX] MB in size, and the files will be kept through [Y] rotations.
I can't seem to find how to do this through Enterprise Manager. I've had a decent look through Google and there seems to be no info on this forum... can anyone point me in the right direction?
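As far as I know there is no size-based rotation setting for the SQL Server error logs in 2000/2005, so a common workaround (assuming you mean the ERRORLOG files, and that you are comfortable with the instance registry setting) is to keep more logs and cycle them on a schedule so no single file gets huge:

-- Keep 12 error logs before the oldest is recycled (instance-level registry value).
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 12;

-- Run this from a scheduled Agent job (e.g. nightly) to close the current
-- error log and start a new one.
EXEC master.dbo.sp_cycle_errorlog;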
My SQL Server 2005 SP4 on Windows 2008 R2 is flooded with the errors below:
Date: 10/25/2011 10:55:46 AM
Log: SQL Server (Current - 10/25/2011 10:55:00 AM)
Source: spid
Message: Event Tracing for Windows failed to send an event. Send failures with the same error code may not be reported in the future. Error ID: 0, Event class ID: 54, Cause: (null).
Is there a way I can trace where these are coming from? When I check the input buffer for these spids, it looks like it is tracing everything; all the general application DML is coming through on these spids.
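To see what those sessions are actually running on 2005, one hedged starting point (the session_id 123 below is just a placeholder for one of the spids reported with the errors) is the execution DMVs:

-- Login, host, program and most recent statement for a given spid.
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       t.text AS most_recent_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.session_id = 123;  -- replace with a spid from the error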
I have been testing with the WMI Event Watcher Task, so that I can identify a change to a file. The WQL is thus:
SELECT * FROM __InstanceModificationEvent within 30 WHERE targetinstance isa 'CIM_DataFile' AND targetinstance.name = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
This polls every 30 seconds, and in the SSIS event handler (ActionAtEvent in the WMI task is set to fire the SSIS event) I have a simple Script Task that displays a message box.
My understanding is that the task polls every 30 seconds, and if there is a change to the AdventureWorks.bak file then the event is triggered and the Script Task runs, producing the message. However, when I run the package the message appears every 30 seconds, meaning the event is continually firing even though there has been NO change to the AdventureWorks.bak file.
Am I correct in my understanding of how this should work, and if so, why is the event firing when it should not?
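One thing worth double-checking (this is a guess on my part, not a confirmed cause): backslashes are escape characters in WQL string literals, so the path in the query above would normally be written with doubled backslashes, e.g.:

SELECT * FROM __InstanceModificationEvent WITHIN 30 WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Name = 'C:\\Program Files\\Microsoft SQL Server\\MSSQL.1\\MSSQL\\Backup\\AdventureWorks.bak'

If the path never matches as written, the watcher's behaviour can be surprising, so it is worth ruling this out before assuming the task itself is at fault.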
Server 2003 SE SP1 (5.2.3790), SQL Server 2000 SP4, 8.00.2187 (latest hotfix rollup). We fixed one issue, but it brought up another: the fix we applied stopped the ServicesActive access failure, but now we have a failure on MSSEARCH. The users this is affecting do NOT have admin rights on the machine; they are SQL developers. We were getting:
Event Type: Failure Audit
Event Source: Security
Event Category: Object Access
Event ID: 560
Date: 5/23/2007
Time: 6:27:15 AM
User: domain\user
Computer: MACHINENAME
Description: Object Open:
  Object Server: SC Manager
  Object Type: SC_MANAGER OBJECT
  Object Name: ServicesActive
  Handle ID: -
  Operation ID: {0,1623975729}
  Process ID: 840
  Image File Name: C:\WINDOWS\system32\services.exe
  Primary User Name: MACHINE$
  Primary Domain: Domain
  Primary Logon ID: (0x0,0x3E7)
  Client User Name: User
  Client Domain: Domain
  Client Logon ID: (0x0,0x6097C608)
  Accesses: READ_CONTROL, Connect to service controller, Enumerate services, Query service database lock state
We recently upgraded to SQL 2005 from SQL 2000. We have most of our issues ironed out; however, about every minute there is a message in the Application event log and the SQL log that states:
EVENT ID 18456: Login failed for user DOMAIN\ACCOUNT [CLIENT: <local machine>]
This is a state 16 message, which I thought meant that the account does not have access to its default database. The account is actually the account that the SQL services run under.
Any ideas? We can't seem to figure this one out. We actually upgraded to 2005 from 2000 and had an error appear after every reboot that prevented the SQL Agent from running ("This application has failed to start because GAPI32.dll was not found. Re-installing the application may fix this problem."). We did a full uninstall of SQL, reinstalled fresh, and restored the databases from .bak files, and that is when the Event ID 18456 errors started occurring every minute.
We don't have any SQL heavy hitters here, so please be detailed with any possible solutions. Thank you very much for any help you can provide!
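Since state 16 generally points at the default database, one hedged thing to try on 2005 (the login name below is a placeholder for the actual account the SQL services run under) is to check and re-point the login's default database:

-- See what the login's default database currently is.
SELECT name, default_database_name
FROM sys.server_principals
WHERE name = 'DOMAIN\sqlservice';  -- placeholder login name

-- Point it back at a database that exists and that the login can access.
ALTER LOGIN [DOMAIN\sqlservice] WITH DEFAULT_DATABASE = [master];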
We set the database to automatically grow the file, with unrestricted file growth, growing by 10%. As I understand it, it will grow automatically, so why did we receive an error message like "primary file full"?
Hi, I'm working with SQL Server 2000 and I have a database with the simple recovery model. Each day I get the following error:
"The log file for database x is full. Backup the transaction log for the database to free up some log space"
I tried to limit the transaction log file to 500 MB, but then I get this error. I have shrunk the transaction log file manually, but the next day I got the same error. If I don't try to limit it, the file grows a lot (1 GB) and then I don't have enough disk space. Can you help me, please?
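A rough SQL Server 2000-era sketch for getting the log back under control once (the logical log file name x_log is a guess; check sysfiles for the real one). After this, simple recovery should keep truncating the log at checkpoints unless something like a long-running transaction is holding it open.

USE x;  -- 'x' is the database named in the error message
SELECT name, size/128.0 AS size_mb FROM dbo.sysfiles;  -- find the logical log file name

BACKUP LOG x WITH TRUNCATE_ONLY;   -- SQL 2000 syntax; removed in later versions
DBCC SHRINKFILE (x_log, 100);      -- guessed logical name; 100 = target size in MB

DBCC OPENTRAN (x);                 -- shows any open transaction preventing truncation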
I have set the initial size of the log file for a database to 1 MB, the maximum size is unrestricted, and the growth increment is 10%. However, when I attempt to delete thousands of rows, the error is still reported that the transaction log file is full. Why can't the log file grow automatically?
My tempdb log file is getting full very frequently. I can see that the tempdb log file is not getting truncated automatically, since checkpoints are not occurring as expected.
If I shrink tempdb, it gets truncated immediately and releases the occupied space.
So I have come to the conclusion that automatic checkpoints are not happening, even though tempdb is in the SIMPLE recovery model.
I searched the KB and found an article related to this error:
http://support.microsoft.com/kb/909369/en-us
I would like to confirm whether the issue described in the article is the same one I am facing. Also, if you could let me know the hotfix details for this, that would be great.
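Before chasing the hotfix, it may be worth confirming how much of the tempdb log is actually in use and whether an open transaction is pinning it; a small hedged check:

DBCC SQLPERF (LOGSPACE);   -- percentage of each database's log currently in use

DBCC OPENTRAN ('tempdb');  -- a long-running open transaction in tempdb blocks
                           -- log truncation even in SIMPLE recovery

USE tempdb;
CHECKPOINT;                -- force a checkpoint, then re-check LOGSPACE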
Weird. I have an Agent job that populates some warehouse data every night. It's been working fine for over a year. Then it failed with a message that the PRIMARY filegroup is full. The database is set to simple recovery, with automatic, unrestricted growth by 10% for both the log file and the data file. The drive it's on has 96 GB free. The data file is 1.5 GB and the log is 2 MB, and there's not very much fluctuation in the amount of data going into it. Anybody have an idea why that would happen? Thanks for any insight, Pete
I got a message that 'Could not allocate space for object '(SYSTEM table id: -732777483)' in database 'TEMPDB' because the 'DEFAULT' filegroup is full. Connection Broken'
I need to find out when the data file and transaction log file are full. Is there any stored procedure that will show how much space is left? We don't want to set autogrow on the files.
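There is no single built-in "file is full" procedure that I know of, but here is a sketch of a query you could schedule to watch free space per file (sysfiles works on SQL 2000 and is still available as a compatibility view on later versions):

-- Allocated vs. used space for each file in the current database.
SELECT name AS logical_file,
       size / 128.0                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM dbo.sysfiles;

-- Log usage, as a percentage, across all databases.
DBCC SQLPERF (LOGSPACE);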
I am receiving the messages below; however, when I go into my database properties and into 'Files', the log files are set either to unrestricted growth or to a 2,097,152 MB limit, and the log files are only taking up about 3 GB.
Could not allocate new page for database. There are no more pages available in the file group.
Database log file is full. Back up the transaction log for the database to free up some log space.
Could not allocate space for object in database because the filegroup is full.
There is a SQL job that failed yesterday. This job calls a stored procedure. The stored procedure doesn't use any temp tables, but it runs lots of UPDATE and INSERT statements.
The application log shows: Error: 9002, Severity: 17, State: 2. The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space. (tempdb.mdf: 1.37 GB; templog.ldf: 19.6 MB)
These files are located on the D: drive, which has 52 GB of free space.
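Given how small templog.ldf is (19.6 MB) compared with the batch of updates and inserts the procedure runs, one hedged option is simply to pre-size the tempdb log so the job does not depend on autogrow keeping up. 'templog' is the default logical name, and the 1024 MB figure is only an example.

-- Confirm the logical file names first.
SELECT name, size/128.0 AS size_mb FROM tempdb.dbo.sysfiles;

-- Pre-grow the tempdb log so it does not have to autogrow mid-batch.
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB);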
I'm new to the DBA world, and have no one else in the company to look up to. Does anyone know what I might need to check out or do when the Data File Size is 204% full? Or is this not necessarily a bad thing?
I'm getting this from a Diagnostic tool I have.
Number of tables: 148
Data file size: 35,941 MB
Data size: 26,549.92 MB
Index size: 177,130.02 MB
Log file size: 5.05 MB
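Those numbers look inconsistent (the index size is larger than the whole data file), which often just means the space statistics are stale; a hedged first step before worrying about the 204% figure is to refresh them and re-run the report:

DBCC UPDATEUSAGE (0);                      -- 0 = current database
EXEC sp_spaceused @updateusage = N'TRUE';  -- report sizes after the refresh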
The primary filegroup is full for one user database. This is a production server. The total database size is 132,186 MB; the data file is 12,000 MB, set to grow automatically, with restricted file growth of 121,024 MB.
There is now 30 GB of free space on the drive.
I have added space to the file three times (1 GB, 10 GB, 3 GB) and increased the restricted file growth as well (5 GB). The database is in simple recovery mode.
I am still getting the above error; please let me know how to proceed.
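A hedged way to see whether it is really the data file that is out of room (rather than the disk or the growth cap) is to compare allocated vs. used extents per file, and if the file genuinely is capped, add a second data file to the PRIMARY filegroup. The database, file, and path names below are placeholders.

USE MyUserDb;          -- placeholder for the affected database
DBCC SHOWFILESTATS;    -- allocated vs. used extents per data file

ALTER DATABASE MyUserDb
ADD FILE (NAME = MyUserDb_Data2,                    -- hypothetical logical name
          FILENAME = 'D:\Data\MyUserDb_Data2.ndf',  -- hypothetical path
          SIZE = 10240MB,
          FILEGROWTH = 10%)
TO FILEGROUP [PRIMARY];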
Hi, I am getting this common error once or twice a day:
Error: 9002, Severity: 17, State: 2. The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
Some details:
1. My log file drive has more than 20 GB free out of 30 GB.
2. Both the data file and the log file have the default setting of unrestricted file growth by 10%.
3. We recently moved from SQL 7.0 to SQL 2000, and the load on the user side has also doubled.
4. We can't use a temporary fix like restarting the server or the SQL service, because the application is a real-time system with very little manual interaction.
Thanks in advance. Regards, Seni
Dear Sir, I have a problem which I am describing below. In my database, the size of the data file is 40 MB and the size of the log file is 4,158 MB, which is making my database huge. Why is the log file so large, and how can I control it? I have already set the parameters for the data and log files, i.e. unrestricted growth and grow automatically by 10 percent. I want to control the log file size. Regards, Vineet Bisht
Hi there, I've just run some DTS packages on my test SQL Server (which has limited hard disk space and memory) and all the tasks have failed because the 'PRIMARY' filegroup is full. Is there a query or script I can run to resolve this problem? M3ckon
Hi, my SQL database log file has filled up recently because there are 55 million records in the main 3 tables. How can I empty the log file? I don't want to attach a new log file or save any previous log info. Thanks for helping me and my company. Abdul Salam, Sr. DBA + Programmer, Xebec Groups of Business.
A customer site detached their database, deleted the .ldf file, and reattached it because they wanted to decrease the log file size. They set the log to restricted file growth at 5 MB. After about 20 minutes they started getting "Transaction log is full", so I had them increase the transaction log to 50 MB. After about another 20 minutes, everyone on the network started getting errors saying that data cannot be added because the PRIMARY filegroup was full ("Could not allocate space for object TrussLumber in database ATP because the primary filegroup is full").
There is plenty of disk space, so I know that's not the problem. And they can add data to other databases, so I can't imagine what could possibly be causing this... Can anyone help me out?
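It may be worth confirming that the re-created files did not come back with a growth cap when the database was reattached; a hedged check and fix (the logical name ATP_Data is a guess, so look it up first):

-- maxsize is in 8 KB pages; -1 means unlimited and 0 means no growth allowed.
SELECT name, size / 128.0 AS size_mb, maxsize, growth
FROM ATP.dbo.sysfiles;

-- If the data file is capped, lift the cap (guessed logical name).
ALTER DATABASE ATP
MODIFY FILE (NAME = ATP_Data, MAXSIZE = UNLIMITED);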