SQL Server Admin 2014 :: List Of Users That Accessed A Table
Jul 22, 2015
I'm trying to find out what tables are being used in a Database.
I don't want just the last user; I want every user along with the dates.
I have a script that returns the last user, but that is not going to work.
The following script returns only the last user, not all users, and it does not include the login name:
WITH LastActivity (ObjectID, LastAction) AS
(
    SELECT object_id AS ObjectID,
           last_user_seek AS LastAction
    FROM sys.dm_db_index_usage_stats u
    WHERE database_id = DB_ID(DB_NAME())
)
SELECT OBJECT_NAME(ObjectID) AS TableName,
       MAX(LastAction) AS LastAction
FROM LastActivity
GROUP BY ObjectID;
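The usage-stats DMVs only record when an index was last touched, not who touched it, so to get the user and the dates you generally need auditing. A minimal sketch, assuming SQL Server Audit is acceptable (the file path and database name below are placeholders):

USE master;
CREATE SERVER AUDIT TableAccessAudit
    TO FILE (FILEPATH = N'C:\AuditLogs\');            -- placeholder path
ALTER SERVER AUDIT TableAccessAudit WITH (STATE = ON);

USE MyDatabase;                                        -- placeholder database
CREATE DATABASE AUDIT SPECIFICATION TableAccessSpec
    FOR SERVER AUDIT TableAccessAudit
    ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::MyDatabase BY public)
    WITH (STATE = ON);

-- Later, list which principal touched which object and when.
SELECT event_time, server_principal_name, object_name, action_id
FROM sys.fn_get_audit_file(N'C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT);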
I am quite new to SSIS but managed to build a package which imports text files into SQL Server. The text files are generated after users complete a manufacturing process on a machine.
The SSIS package is stored in the SSIS catalog, and currently a SQL Agent job runs every evening to import new files that have been created during the day. Users have now requested the ability to run the import process as soon as they have finished their manufacturing runs, as they may want to query the data to look up stats etc.
What is the best way to do this, considering the users are not SQL people and won't have direct logins to the SQL Server or access to SQL Server Management Studio? They will have access to the PC where the files are generated, so ideally I need a batch file which they can just execute to import their new files.
I have seen lots of things on the web about running dtexec, but as the package is stored in the SSIS catalog, how can I execute it remotely?
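One possible shape for this, sketched below on the assumption that the batch file can authenticate to the server (for example via a shared Windows account, since the users have no individual logins): put the catalog execution into a small T-SQL script and have the batch file call it with sqlcmd. Folder, project, and package names are placeholders.

-- RunImport.sql, invoked from a batch file such as:
--   sqlcmd -S MYSERVER -E -i RunImport.sql
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
     @folder_name     = N'ImportFolder',      -- placeholder
     @project_name    = N'TextFileImport',    -- placeholder
     @package_name    = N'ImportFiles.dtsx',  -- placeholder
     @use32bitruntime = 0,
     @execution_id    = @execution_id OUTPUT;

EXEC SSISDB.catalog.start_execution @execution_id;

An alternative with a smaller permission footprint is to keep the existing SQL Agent job and have the script simply call msdb.dbo.sp_start_job for it.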
Our development team wanted to create a database user for each application user and use these for granular data access control, which at first sounded like a good idea, but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
I have a requirement to delete all the orphaned users from the databases. The issue I am having is that when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer the schema to dbo when I am looping through multiple databases? This is what I have so far:
DECLARE @is_read_only NVARCHAR(200)
SELECT @is_read_only = is_read_only FROM master.sys.databases WHERE name = 'test' /* This should be a parameter value */
IF @is_read_only = 0
BEGIN
    DECLARE @SQL AS VARCHAR(200)
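For the schema-ownership part, a minimal sketch (the user name is a placeholder): transfer any schemas owned by the orphaned user to dbo with ALTER AUTHORIZATION, then drop the user.

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name) + N' TO dbo;'
FROM sys.schemas AS s
JOIN sys.database_principals AS p ON p.principal_id = s.principal_id
WHERE p.name = N'OrphanUser';            -- placeholder user name

EXEC sys.sp_executesql @sql;
DROP USER [OrphanUser];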
I'm looking for a way to list the target servers associated with a master server. The reason is that we're moving to another master server, and I'd prefer not to move the targets manually.
I've got most of the T-SQL already (sp_msx_enlist, sp_add_jobserver), but I'd like a scripted solution instead of a wizard.
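For reference, the enlisted targets are recorded in msdb on the master server; a minimal sketch that lists them and builds the re-enlistment command to run on each target (the new master name is a placeholder):

SELECT server_name,
       enlist_date,
       last_poll_date,
       'EXEC msdb.dbo.sp_msx_enlist @msx_server_name = N''NewMasterServer'';' AS run_on_target
FROM msdb.dbo.systargetservers;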
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... For each unique TemplateId, how can I insert one more row?
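Assuming the goal is one extra row per distinct TemplateId in the same table, a minimal sketch (table and column names are placeholders):

INSERT INTO dbo.TemplateRows (TemplateId, SomeColumn)
SELECT DISTINCT TemplateId, 'extra row'        -- whatever the new row should contain
FROM dbo.TemplateRows;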
Previously the same records existed in both the table with the primary key and the table with the foreign key. We have found that 7 records were lost from the primary key table, but the same records still exist in the foreign key table.
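To see which child rows no longer have a parent, a minimal sketch (table and column names are placeholders):

SELECT c.*
FROM dbo.ChildTable AS c                        -- table with the foreign key
LEFT JOIN dbo.ParentTable AS p ON p.Id = c.ParentId
WHERE p.Id IS NULL;                             -- parent row is missing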
I'm being asked to create multiple filegroups for a new database based on table type: transaction, lookup, misc... From what I'm reading, this doesn't make sense. What I'm reading is that filegroups are used for large tables, for nonclustered indexes when they are about the same size as the data, or for a few other reasons...
First of all, we are talking about the same disk (please don't ask me how it is configured), and I'm not sure yet whether restoring separate filegroups is even going to be necessary.
So here are my questions (beyond "test and see what happens"), because in the end I'm probably going to have to do what I'm told. So this is for my professional knowledge.
1. Do filegroups separated by table type make sense?
2. Should tables that are often queried together be put in the same filegroup or in different ones?
3. I'm pretty sure you can't restore a single filegroup for write access, am I correct?
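For reference, placing a table on its own filegroup looks like this (database name, file path, and object names are placeholders):

ALTER DATABASE MyDb ADD FILEGROUP LookupFG;

ALTER DATABASE MyDb ADD FILE
    (NAME = N'MyDb_Lookup1', FILENAME = N'D:\Data\MyDb_Lookup1.ndf')
    TO FILEGROUP LookupFG;

CREATE TABLE dbo.LookupTable (Id INT PRIMARY KEY, Label NVARCHAR(50)) ON LookupFG;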
I am trying to replicate data from a view in the publisher to a table in the subscriber (transactional replication). I do not need the view's base table, or the view itself, replicated to the subscriber. I only want the data from the view to feed a table in the subscriber.
I have a bunch of heap tables and the fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
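A minimal sketch for checking heap fragmentation and forwarded records, and for rebuilding a heap in place (ALTER TABLE ... REBUILD works on heaps from SQL Server 2008 on; the table name is a placeholder):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_fragmentation_in_percent,
       ips.forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_id = 0;                          -- index_id 0 = heap

ALTER TABLE dbo.MyHeapTable REBUILD;             -- placeholder table name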
With all the new functionality, can 2014 now restore a single table from a standard backup without using any third party tools? I have looked, but can't see this listed as a feature (though that doesn't mean it's not there, maybe I've just missed it).
I have a master table with an after-insert trigger on it. When a record is inserted into the master table, the trigger fires and the record is captured in the back-office table. If the trigger fails, my record ends up in neither the master table nor the back-office table.
Is there any way to capture the record either in the master table or in a separate table?
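One option, sketched below with placeholder object names: wrap the trigger body in TRY/CATCH so that, for errors that leave the transaction committable, the failed rows are written to an error table instead of rolling back the master insert.

CREATE TRIGGER dbo.trg_Master_AfterInsert ON dbo.MasterTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO dbo.BackOffice (MasterId, CreatedOn)
        SELECT i.Id, SYSDATETIME() FROM inserted AS i;
    END TRY
    BEGIN CATCH
        -- Log the rows and the error instead of failing the outer insert.
        INSERT INTO dbo.BackOfficeErrors (MasterId, ErrorMessage, LoggedOn)
        SELECT i.Id, ERROR_MESSAGE(), SYSDATETIME() FROM inserted AS i;
    END CATCH
END;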
I have a job scheduled that imports a table from an Oracle database. The job runs at 3am and reports success. But for some reason, when I query the table to see how many records there are, I see the same row count as the day before (it should increase every day, since it tracks student enrollment). When I execute the package manually, the table updates fine.
In our operational databases, tables are sometimes created on a temporary basis but never removed. How can I identify the tables that were created on a temporary basis and are no longer used (i.e. have no read or write activity recorded)?
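A minimal sketch: list user tables with no entry at all in sys.dm_db_index_usage_stats for the current database (note these stats reset at every SQL Server restart, so this only reflects activity since then).

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN sys.dm_db_index_usage_stats AS u
       ON u.object_id = t.object_id AND u.database_id = DB_ID()
WHERE u.object_id IS NULL;                       -- no reads or writes recorded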
The error is "The table with Name of 'Table Name' does not exist. An error occurred when loading the Model."
We get this error when we try to check the properties of an Analysis Services server using SQL Server Management Studio. We have resolved this issue twice by stopping the SQL Server Analysis Services service, removing db folders from the Analysis Services Data folder, and starting the service back up. The db folder that we removed was advised by the BI team.
Is there any way to enforce table references in stored procedures? For example, we have stored procedures with a ton of different formats: "dbo.table", "table", "db.dbo.table", etc. Can we make it so that, for every stored procedure, the reference must be at least "dbo.table"?
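As far as I know there is no setting that forces two-part names, but you can at least find the offenders; a minimal sketch using sys.sql_expression_dependencies (referenced_schema_name is NULL when the reference is not schema-qualified):

SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS proc_schema,
       OBJECT_NAME(d.referencing_id)        AS proc_name,
       d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_schema_name IS NULL
  AND OBJECTPROPERTY(d.referencing_id, 'IsProcedure') = 1;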
I am setting up extended events more or less just fine; however, I am a bit confused as to how to read the files and load them into a table for querying. In particular the offset part - is there a way to load in just a given date's worth?
I've got the files configured to be 20MB before rolling over, the XE is running all the time.
So if I load in the full file now, say that covers 2.5 days' worth, when I load it again tomorrow to get the updated data I'm also reloading today's data, which is a waste.
I presume I am going about this wrong, but lack an example that really goes into detail of practicalities of loading this data.
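A minimal sketch of the usual pattern (the session path is a placeholder): read the .xel files with sys.fn_xe_file_target_read_file, keep the file_name and file_offset of the last row you loaded, and pass those two values back in as the initial_file_name and initial_offset arguments next time, so only new events are picked up.

SELECT xed.file_name,
       xed.file_offset,
       CAST(xed.event_data AS XML).value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       CAST(xed.event_data AS XML) AS event_xml
INTO dbo.XeStaging
FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, NULL, NULL) AS xed;

-- Next load: resume from the last saved point instead of re-reading everything.
-- SELECT ... FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL,
--        @last_file_name, @last_file_offset);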
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
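For reference, a sketch of that arrangement on 2012+ (role and login names are placeholders): a custom server role nested inside securityadmin, with ALTER ANY LOGIN explicitly denied to the custom role.

CREATE SERVER ROLE ErrorLogReaders;
ALTER SERVER ROLE securityadmin ADD MEMBER ErrorLogReaders;
DENY ALTER ANY LOGIN TO ErrorLogReaders;
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\HelpdeskUser];   -- placeholder login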
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB and 13 million rows. During this load the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. Tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 are int, 1 is smallint, 1 is varchar(200)). The disk for the data files still has space (including the Hekaton files).
How can I reduce the size of the LDF file during the load of the data?
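One way to keep the log in check, sketched here with placeholder names and an assumed ascending Id column: load the memory-optimized table in batches so no single transaction logs all 13 million rows, and let log backups (FULL recovery) or checkpoint truncation (SIMPLE) reclaim log space between batches.

DECLARE @batch INT = 500000, @id INT, @maxId INT;
SELECT @id = MIN(Id), @maxId = MAX(Id) FROM dbo.SourceTable;

WHILE @id <= @maxId
BEGIN
    INSERT INTO dbo.MemOptTarget (Id, Col1, Col2, Col3)
    SELECT Id, Col1, Col2, Col3
    FROM dbo.SourceTable
    WHERE Id >= @id AND Id < @id + @batch;

    SET @id += @batch;
END;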