How To Prevent Users From Logging In To DB During Its Maintenance
Jun 11, 2004
I am really a DB admin beginner, so hopefully it'll be easy for you to answer my simple question:
How can I temporarily prevent users (except "sa") from logging in to the database(s), or better yet to the DB server, while scripts are being applied to them?
I have already tried changing the TCP/IP port, hiding the server, and running in single-user mode, but the results were very poor. Please note that "sa" has to be able to connect to the DB server from a third-party application, but other users must not.
Any ideas and/or a known solution?
Thanks in advance for your hints
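One option worth trying, sketched here with a hypothetical database name MyDatabase, is to put the database into restricted-user mode for the duration of the scripts. Members of the sysadmin and db_owner roles (which includes "sa") can still connect, while other users are kept out; WITH ROLLBACK IMMEDIATE disconnects anyone currently in the database and rolls back their open transactions.
ALTER DATABASE MyDatabase SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
-- ... run the maintenance scripts here ...
ALTER DATABASE MyDatabase SET MULTI_USER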
I have SQL Server 2000 installed and SQL Server Management Studio 2008 R2 Express Edition on the users' desktops. I noticed that a user can stop the SQL Server Agent, even when that user has read-only access.
How do I prevent other users from changing the data in my tables? That is, data can be changed using only my login; nobody else can change it, not even a DBA or a server administrator.
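For ordinary logins this can be done with table-level DENY statements, but note that members of the sysadmin role (which a DBA or server administrator typically has) bypass permission checks and cannot be locked out this way. A minimal sketch, assuming a hypothetical table dbo.MyTable and a hypothetical database role OtherUsers that contains every user except your own:
DENY INSERT, UPDATE, DELETE ON dbo.MyTable TO OtherUsers
GRANT SELECT ON dbo.MyTable TO OtherUsers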
I am running a SQL maintenance job on a 40 GB database which performs optimizations by reorganizing data and index pages. After the job is finished, a separate job performing a SQL transaction log backup is run on the same database, which produces a 30 GB transaction log backup file. Is there any way to turn off logging during the maintenance plan, so that when the transaction log backup occurs it will not produce a large backup file?
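Logging itself cannot be turned off, but one option, sketched with a hypothetical database name MyDatabase, is to switch to the bulk-logged recovery model around the maintenance job, since index rebuilds are minimally logged under that model. Be aware that the following log backup can still be sizeable, because it also contains the data extents changed by the minimally logged operations, and point-in-time restore is not possible for that log interval.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
-- ... run the optimization/reindex job here ...
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn' -- hypothetical path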
Based on our database infrastructure, we need to secure our SQL databases. The security issue concerns allowing only a limited number of Domain Admin users to access the SQL databases. We tried certain approaches based on the documents on the Microsoft web site, but we couldn't reach the point of preventing the Domain Admin users from accessing the SQL databases.
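If the Domain Admin users are getting in through the BUILTIN\Administrators group (present by default on older versions), one approach is to remove that group's login and grant explicit logins only to the accounts that should have access. A hedged sketch, assuming the service accounts already have their own logins so nothing breaks:
-- SQL Server 2005 and later
DROP LOGIN [BUILTIN\Administrators]
-- SQL Server 2000
EXEC sp_revokelogin 'BUILTIN\Administrators'
Keep in mind that a local administrator can still regain sysadmin access (for example by starting the instance in single-user mode), so this raises the bar rather than making access impossible.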
I'm developing an application in ASP.NET that uses SQL Server. The database person on the team wants each user to have a separate login to the database so he can tell who has made which changes in the database. Is this a good practice? Or are there performance/security issues with this model?
I am developing a package on my local workstation. I have defined two logging service providers. One is for SQL Server and the other is for the Windows Event Log. I am using the Dts.Log method in a script task to write log entries.
Logging is working properly with the SQL Server provider and rows are being inserted into the sysdtslog90 table. However, the only events that are being logged in the Windows Event Log are the package start and end events which I believe SSIS is doing automatically anyway.
Is there something I need to do to enable Windows Event Log logging other than defining a log provider and making sure it is checked as active? Won't SSIS write to two different logs with one Dts.Log call? Any ideas on what might be going wrong with my approach?
Hi, I decided to use the SQL Server log provider to store logging data from all my Integration Services packages. I also created some reports on this data for operations purposes. The problem is that the name of the executing package is not always written to the log, only the name of the single task which failed. That is not very useful information for operations, because I do not see any way to get the name of the package from the information logged in the sysdtslog90 table in the database I defined for SSIS logging.
How do I configure the package to always log the package information into the table, too?
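One workaround, sketched against the standard sysdtslog90 columns, is to join each logged row back to the PackageStart row with the same executionid; the source column of that row carries the package name (the WHERE clause here assumes the reports are interested in error rows):
SELECT p.source AS package_name,
       l.source AS task_name,
       l.event,
       l.starttime,
       l.message
FROM dbo.sysdtslog90 AS l
JOIN dbo.sysdtslog90 AS p
    ON p.executionid = l.executionid
   AND p.event = 'PackageStart'
WHERE l.event = 'OnError'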
I recently read the Project REAL ETL design best practices whitepaper. I, too, want to do custom logging as I do today, and also use SSIS logging. The paper recommended using the variable System::PackageExecutionId to tie the two logging methods together.
I have a question that I hope someone can clear up for me. I have come across a number of different suggestions on DB maintenance, for example reindexing with the following script:
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
    SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, ' ', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
My question is, doesn't the maintenance plan have this functionality inherent in it when you create the maintenance jobs to reindex? Is there a benefit to scripting things out vs just using the maintenance plan wizard for this sort of thing and any of the items it covers? I came from an Oracle background where this was a no-brainer but I am a bit confused on the choices with SQL Server.
We have an existing SSRS server and have just created a new child domain. We'll be migrating users from the parent to the child, and want to give the users of that new domain access to SSRS. In the parent domain they are able to access it, but after migration, using the child domain account, they cannot.
I have added the group CHILD\Domain Users with a system user role on SSRS, and PARENT\Domain Users was already there.
Is there any additional step I should/could take to get this active?
I have had this issue just pop up. I have local users who can connect fine, but my users that require connection by VPN cannot connect. I get the server not available or access denied error. I did confirm that the VPN'ers are connected to the network correctly and can see that their shares and mappings are correct. Any ideas? Thanking you all in advance!!
I am trying to revert back to Windows 7 after upgrading to Windows 10, however it will not let me and the following message occurs: "Remove new accounts. Before you can go back to a previous version of Windows, you'll need to remove any user accounts you added after the most recent upgrade. The accounts need to be completely removed, including their profiles. You created one account (NT SERVICE\MSSQLSERVER). Go to Settings > Accounts > Other users to remove these accounts and then try again". However, I did not create any new users and there are no other users listed in the Accounts section.
Hi all, I've got two tables called "webusers" (id, name, fk_country) and "countries" (id, name). I also have a search page where I can fill in a form to search for users. In the dropdown for selecting the country I included an option called "all countries". Now the problem is: how can I write a stored procedure that restricts on fk_country depending on the submitted fk_country parameter? It should be something like SELECT * FROM webusers (if @fk_country > 0; 0 is the value for "all countries") { WHERE fk_country = @fk_country }. Who has an idea how to solve this problem?
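A rough sketch of one common pattern, assuming that 0 is the value submitted for "all countries" (the procedure name SearchWebUsers is made up; the tables and columns follow the description above):
CREATE PROCEDURE SearchWebUsers
    @fk_country int
AS
SELECT w.id, w.name, c.name AS country
FROM webusers w
JOIN countries c ON c.id = w.fk_country
WHERE (@fk_country = 0 OR w.fk_country = @fk_country)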
I am testing some maintenance task SQL commands, such as index rebuild, index reorg, update statistics and DB integrity check, on a SQL Server 2014 database. This is a new non-production vendor database (DB size 500 GB, log size 25 GB) which eventually will be created in production. Currently it is in the full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance tasks will take and the space needed in the log file to complete these tasks/commands. I would like to execute these tasks manually (the first time) to gather the duration and space-required information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GBs. I killed the process and truncated and shrunk the log file back down.
1. Do the index rebuild, index reorg, update statistics and DB integrity check commands all use the log file?
2. Does an index reorg have less impact on the log file than an index rebuild?
3. Should a truncate log and shrink log file be performed after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before the maintenance commands?
I have read and understand that shrinking is not good for the database (it can lead to more fragmentation and more data file growth when data is added), and I know about rebuilding indexes when fragmentation is greater than 30% and reorganizing indexes when fragmentation is greater than 5% and at most 30%.
Since this is a non-production database maybe I should set the recovery model to simple, run the maintenance commands and leave the database in simple recovery model unless the vendor needs it in full recovery model for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
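A rough sketch of that sequence, assuming a hypothetical database MyVendorDb and table dbo.BigTable; under the simple recovery model the log is truncated at each checkpoint, so the space is reused rather than growing indefinitely, although a single large index rebuild can still need a sizeable chunk of log space while it runs:
ALTER DATABASE MyVendorDb SET RECOVERY SIMPLE
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (FILLFACTOR = 90) -- minimally logged under simple recovery
ALTER INDEX ALL ON dbo.BigTable REORGANIZE                     -- lighter-weight alternative for mildly fragmented indexes
UPDATE STATISTICS dbo.BigTable
DBCC CHECKDB ('MyVendorDb')
-- switch back only if the vendor requires the full recovery model:
-- ALTER DATABASE MyVendorDb SET RECOVERY FULL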
I am building my first ASP.NET app from scratch, and while working on the DAL I came across the problem of SQL injection. I searched the web and read different articles, but I am still unsure about the answer. My question is: should I add parameters to my C# code to avoid SQL injection? What is the best practice? I am unclear whether the stored procedure already protects me from SQL injection or whether I need to add the parameters in the C# methods to make it work. I need some help. Thanks, Newbie
My C# update method in the DAL (still working on the code)
private static bool Update(AvatarImageInfo avatarImage)
{
    //Invoke a SQL command and return true if the update was successful.
    db.ExecuteNonQuery("syl_AvatarImageUpdate",
        avatarImage.AvatarImageID,
        avatarImage.DateAdded,
        avatarImage.ImageName,
        avatarImage.ImagePath,
        avatarImage.IsApproved);
    return true;
}
I am using stored procedures to access the data in the database.
My update stored proc
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[syl_AvatarImageUpdate]
    @AvatarImageID int,
    @DateAdded datetime,
    @ImageName nvarchar(64),
    @ImagePath nvarchar(64),
    @IsApproved bit
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    BEGIN TRY
        UPDATE [syl_AvatarImages]
        SET [DateAdded] = @DateAdded,
            [ImageName] = @ImageName,
            [ImagePath] = @ImagePath,
            [IsApproved] = @IsApproved
        WHERE [AvatarImageID] = @AvatarImageID
        RETURN
    END TRY
    BEGIN CATCH
        -- Execute LogError SP
        EXECUTE [dbo].[syl_LogError];
        -- Being in a CATCH block indicates failure.
        -- Force RETURN to -1 for consistency (other return values are generated, such as -6).
        RETURN -1
    END CATCH
END
Have a job that calls a DTS package; the DTS package is an Export & Import Wizard package that copies tables. Someone deleted a table from the source and my job failed last night. Inputs appreciated.
I'm going through my application log, just seeing what errors are popping up. I have a relatively intense search feature that's causing a lot of deadlocks.
Exception type: SqlException Exception message: Transaction (Process ID 105) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
In general, what's the best way to resolve this ?
Should I see if I can apply "WITH (NOLOCK)" to my data?
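WITH (NOLOCK) is one way, but it allows dirty reads. Two hedged sketches, using a made-up table name dbo.Products and database name MyDatabase: either add the hint to the search queries, or (SQL Server 2005 and later) turn on read committed snapshot so readers stop blocking writers across the whole database.
SELECT ProductID, ProductName
FROM dbo.Products WITH (NOLOCK) -- dirty reads possible
WHERE ProductName LIKE '%widget%'

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON -- needs exclusive access to the database while it switches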
I want to try and protect myself from my own stupidity. I have a number of sql databases, but one is LIVE. It is easy to drop tables but I want to set something (e.g. a password) which will help prevent me from dropping tables on the live database.
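If the live database is on SQL Server 2005 or later, a database-level DDL trigger can act as that safety catch: dropping a table fails until you deliberately disable the trigger. A minimal sketch (the trigger name is made up):
CREATE TRIGGER trg_SafetyNet_NoDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR ('DROP TABLE is blocked on this database; disable trg_SafetyNet_NoDropTable first.', 16, 1)
    ROLLBACK
END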
Hi, I'm using SQL Server 2000 MSDE on a laptop running Windows XP. I have a couple of SPs that take quite some time to compile, so I was wondering: is there any way to have the database *not* recompile them every time after a reboot? BOL says: "As a database is changed by such actions as adding indexes or changing data in indexed columns, the original query plans used to access its tables should be optimized again by recompiling them. This optimization happens automatically the first time a stored procedure is run after Microsoft® SQL Server™ 2000 is restarted." Now the SQL Server is restarted a lot, because laptops don't have endless batteries <g>. Cheers, Bas
Hello, I noticed a spelling mistake in the data in a column of several tables, so I used the following syntax to correct the spelling: UPDATE [dbo].[Prod_Cat] SET [ProdName] = N'merseyside' WHERE ProdName = 'mmserseyside'. The above code correctly updated the spelling error, but it also inserted a new row with the corrected data, so I found myself with two identical rows containing the corrected information. I had to manually delete the extra row, because if I had put in a DELETE statement I would have lost both rows. What do I need to do to prevent this happening next time? I find that I need to update the names of some products, but I don't want to duplicate them. Thanks, Lynn
This is a question I posted in the Microsoft SQL community, but it hasn't been answered in full.
------------
I am using dynamic SQL to run a query with different ORDER BY clauses and/or WHERE clauses depending on the variables I pass to the SP.
ex:
create proc ex @orden varchar(100), @criterio varchar(100)
as
declare @consulta varchar(4000)
set @consulta = N'select pais from paises where ' + @criterio + ' order by ' + @orden
exec (@consulta)
------------
I'd like to know whether this puts two entries in the plan cache, as I have read: one for the main SP and one for the query inside the dynamic SQL variable. If so, as I imagine, then I suppose I have to write the main SP without any IF statements so that it stays the same SP, is taken from the cache, and is not recompiled.
Right now I have several IF statements in the main SP (the caller of the dynamic SQL), but I plan to remove them and do the IF logic in the program (it is in ASP.NET). I suppose this is better, because that way the main SP is taken from the cache, assuming it uses the cache differently than the dynamic SQL in the variable.
What do you think? Does the dynamic SQL use two cache entries? If so, do you think it is better to try to keep the main SP identical in all uses (no IF statements)?
-----
They told me this coding style (dynamic SQL) is not good because it can give control to the user.
I ask: how does it give control to the user? What are SQL injection attacks and how do I prevent them?
I use dynamic SQL because I have 150 queries to write, and I thought dynamic SQL was a good fit.
Is it true that dynamic SQL has to be recompiled on each execution? I suppose that happens only if the SQL string is different, right?
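One hedged alternative for the WHERE side is sp_executesql with parameters, so the filter value travels as a parameter instead of being concatenated into the string and the plan can be reused; the ORDER BY column cannot be parameterized, so it still has to be validated against a known list of column names. A sketch using made-up column names poblacion and nombre:
DECLARE @consulta nvarchar(4000)
SET @consulta = N'select pais from paises where poblacion > @minPoblacion order by nombre'
EXEC sp_executesql @consulta,
     N'@minPoblacion int',
     @minPoblacion = 1000000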
On my site I have a simple textbox which is a keyword search: people type a keyword, and it looks in 3 columns of a SQL database and returns any matches.
The code is basic, i.e. SELECT * FROM Table WHERE Column1 LIKE '%search%'
There is no validation of what goes into the text box and I am worried about SQL injection; what can I do to minimize the risk?
I have just tried the site and put in two single quotes as the search term; this crashed the script, so I know I am vulnerable.
Can anyone help, perhaps point me in the direction of further resources on the subject?
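The usual fix is to stop building the SQL string from the textbox value and pass the keyword as a parameter instead, for example through a stored procedure. A rough sketch with made-up object and column names; the application then supplies @keyword as a command parameter rather than concatenating it into the command text:
CREATE PROCEDURE dbo.SearchByKeyword
    @keyword nvarchar(100)
AS
SET NOCOUNT ON
SELECT *
FROM dbo.MyTable
WHERE Column1 LIKE '%' + @keyword + '%'
   OR Column2 LIKE '%' + @keyword + '%'
   OR Column3 LIKE '%' + @keyword + '%'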
I want to be able to read and update a value in the database without entering a race condition. For example: User #1 reads a row from the database, changes a value then writes the value back. User #2 reads the same row AFTER user #1 has read it, but BEFORE user #1 writes it back. User #2 then changes the value and writes it back, overwriting the value that user #1 wrote. I thought I could do this with transactions, but it just makes user #2 wait until user #1 is done writing before user #2 can write. It doesn't stop user #2 from reading while user #1 has it out. Does that make sense?
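Yes, that makes sense: plain transactions only serialize the writes. One sketch of a pessimistic approach, using a made-up table dbo.Accounts, is to take an update lock at read time inside the transaction, so the second reader blocks until the first read-modify-write has committed:
BEGIN TRANSACTION
DECLARE @value int
SELECT @value = Balance
FROM dbo.Accounts WITH (UPDLOCK, HOLDLOCK)
WHERE AccountID = 42
-- ... apply the change to @value ...
UPDATE dbo.Accounts SET Balance = @value WHERE AccountID = 42
COMMIT TRANSACTION
An optimistic alternative is a rowversion/timestamp column checked in the UPDATE's WHERE clause, retrying when no row is affected.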
Is there a way to monitor all ODBC connections to a MSSQL server and prevent a particular username/ODBC combination? My problem is that we have many frontends for viewing reports, but we manage them all and users are not allowed to make their own connections. Some users now use MS Access over ODBC to build their own reports; they have all the permissions needed by the other apps.
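There is no built-in switch for blocking a specific username/driver pairing, but on SQL Server 2005 SP2 and later a logon trigger can inspect the connection and roll the login back. A hedged sketch; the login name and application-name pattern are assumptions, and APP_NAME() comes from the connection string, so a determined user could spoof it:
CREATE TRIGGER trg_BlockAdHocAccess
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'DOMAIN\SomeUser'
       AND APP_NAME() LIKE '%Microsoft Office%' -- MS Access typically reports an Office/Access application name
        ROLLBACK
END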
I'm writing a trigger to prevent duplicates. I know that this can be done through a primary key or unique constraint, but in the real world my uniqueness is defined by 8 columns, which is too big an index to maintain as the primary/unique key.
CREATE TRIGGER trigger_Check_Duplicates ON Table1
FOR INSERT, UPDATE
AS
-- This trigger has been created to check that duplicate rows are not inserted into AudioVisual table.
DECLARE @IsDuplicate INTEGER
-- Check if row exists
SELECT @IsDuplicate = 1
FROM Inserted i, Table1 t
WHERE t.Centre = i.Centre AND t.Month = i.Month
IF (@IsDuplicate = 1)
-- Display Error and then Rollback transaction
BEGIN
    RAISERROR ('This row already exists in the table', 16, 1)
    ROLLBACK TRANSACTION
END
Then I insert a row into the new table (no other data is in there):
INSERT Table1 VALUES('0691040176','AUG')
I get the trigger error message that the row already exists. Why is this the case? I thought that Table1 (the target table) would show no entries as it has no data; it should be a before image of the table, and the Inserted table should be an after image.
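Because this is an AFTER trigger, the row from Inserted has already been added to Table1 by the time the trigger runs, so the join always finds at least the row just inserted and the check fires every time. A hedged sketch of one rewrite, still assuming uniqueness is defined by Centre and Month as in the posted trigger: count the matching rows and raise the error only when there is more than one.
ALTER TRIGGER trigger_Check_Duplicates ON Table1
FOR INSERT, UPDATE
AS
IF EXISTS (
    SELECT t.Centre, t.Month
    FROM Table1 t
    JOIN Inserted i ON t.Centre = i.Centre AND t.Month = i.Month
    GROUP BY t.Centre, t.Month
    HAVING COUNT(*) > 1
)
BEGIN
    RAISERROR ('This row already exists in the table', 16, 1)
    ROLLBACK TRANSACTION
END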