SQL Server Admin 2014 :: Identify Tables Made On A Temporary Basis That Are Not Used?
Sep 9, 2015
In our operational databases, tables are sometimes created on a temporary basis but never removed afterwards. How can I identify tables that were created on a temporary basis and are no longer used (i.e. have no read or write activity)?
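A minimal sketch of one way to find such tables, assuming "no activity since the last instance restart" is a good enough proxy (sys.dm_db_index_usage_stats is cleared on restart, so it only covers the current uptime window):

SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (SELECT 1
                  FROM sys.dm_db_index_usage_stats AS u
                  WHERE u.database_id = DB_ID()
                    AND u.object_id = t.object_id
                    AND (u.user_seeks > 0 OR u.user_scans > 0
                         OR u.user_lookups > 0 OR u.user_updates > 0));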
I was running an operation to shrink file/emptyfile a data file, and then remove it.
It blocked and caused a huge mess; I suspect the removal part was the culprit. But I want to confirm that the EMPTYFILE completed (and that the engine isn't going to try to put more data in the file before I schedule the removal again, a week or more from now).
How does the engine know not to put any more data in there, and how long does that situation last?
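Once DBCC SHRINKFILE ... EMPTYFILE completes, the engine marks the file so no new data is allocated to it, and re-running EMPTYFILE is harmless (it returns quickly if the file is still empty), so it is a cheap re-check just before the removal. A hedged sketch; the logical file name below is an assumption:

-- How much data is still allocated in the file?
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS space_used_mb
FROM sys.database_files
WHERE name = N'MyDataFile2';

-- Safe to re-run; fast if the file is already empty
DBCC SHRINKFILE (N'MyDataFile2', EMPTYFILE);

-- Later, once confirmed empty:
ALTER DATABASE CURRENT REMOVE FILE MyDataFile2;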
IF OBJECT_ID('dbo.GoldenSecurity') IS NOT NULL
    DROP TABLE dbo.GoldenSecurity;
IF OBJECT_ID('dbo.GoldenSecurityRowVersion') IS NOT NULL
    DROP TABLE dbo.GoldenSecurityRowVersion;
DECLARE @str varchar(2000)
SET @str = 'SELECT * INTO #TmpTable FROM FormHistory'
EXEC (@str)
SELECT * FROM #TmpTable
gives the following error:
Invalid object name '#TmpTable'.
This is a very cut-down version of what I am trying to achieve, so it might not be obvious why I am writing it into a string and using EXEC, but in the real code I do need to do this. I have cut it right back to try to get to the bottom of why it doesn't work. I suspect the # in the string is causing the problems.
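The # itself is fine; the issue is scope: a local temp table created inside EXEC is dropped as soon as that dynamic batch ends. A minimal sketch of two common workarounds (FormHistory is from the original; everything else is illustrative):

-- Option 1: create the temp table in the outer scope, populate it inside the dynamic batch
SELECT * INTO #TmpTable FROM FormHistory WHERE 1 = 0;   -- empty shell with the right columns

DECLARE @str varchar(2000);
SET @str = 'INSERT INTO #TmpTable SELECT * FROM FormHistory';
EXEC (@str);

SELECT * FROM #TmpTable;

-- Option 2: use a global temp table (##TmpTable) inside the dynamic SQL; it survives
-- the EXEC and lives until the creating session closes.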
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... For each unique TemplateId, how can I insert one more row?
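A minimal sketch using the INSERT ... SELECT DISTINCT pattern, which adds exactly one new row per distinct TemplateId already present; the table and column names are assumptions:

INSERT INTO dbo.Templates (TemplateId, SomeColumn)
SELECT DISTINCT TemplateId, 'extra row'
FROM dbo.Templates;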
Previously the same records existed in both the table with the primary key and the table with the foreign key. We have found that 7 records were lost from the primary key table, while the same records still exist in the foreign key table.
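A quick way to list the orphaned rows, assuming hypothetical dbo.Parent / dbo.Child tables joined on ParentId:

SELECT c.*
FROM dbo.Child AS c
LEFT JOIN dbo.Parent AS p ON p.ParentId = c.ParentId
WHERE p.ParentId IS NULL;   -- child rows whose parent key no longer exists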
I'm being asked to create multiple filegroups for a new database based on the table type: transaction, lookup, misc... From what I'm reading, this doesn't make sense. What I'm reading is that either large tables get their own filegroups, or nonclustered indexes do when they are about the same size as the data, or a few other reasons...
First of all, we are talking about the same disk (please don't ask me about how it is configured) and I'm not sure yet if restoring separate file groups is even going to be necessary.
So here are my questions (beyond "test and see what happens"), because in the end I'm probably going to have to do what I'm told. So this is for my professional knowledge.
1. Do filegroups separated by table type make sense?
2. Should you put tables that are often queried together in the same filegroup or in different filegroups?
3. I'm pretty sure you can't restore a single filegroup for write access, am I correct? (See the sketch below.)
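On question 3: with Enterprise edition you can do an online filegroup restore, and the filegroup comes back read-write once it has been rolled forward to be consistent with the rest of the database. A hedged sketch only; database, filegroup, and backup names are assumptions, and it presumes the filegroup's files are offline (damaged or taken offline) and that a tail-log backup exists:

RESTORE DATABASE SalesDb FILEGROUP = 'FG_Lookup'
    FROM DISK = N'C:\Backups\SalesDb_full.bak'
    WITH NORECOVERY;

RESTORE LOG SalesDb FROM DISK = N'C:\Backups\SalesDb_log1.trn' WITH NORECOVERY;
RESTORE LOG SalesDb FROM DISK = N'C:\Backups\SalesDb_tail.trn' WITH RECOVERY;
-- Once the roll forward completes, FG_Lookup is online and writable again.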
I want the count of orders in a particular table on a weekly basis, i.e. if the date given to me is 10/3/2014, then my output should be the count of orders from 10/3/2014 to 09/3/2014 (one week), then the count of orders from 2/3/2014 to 08/3/2014 (another week), and then from 24/2/2014 to 01/3/2014 (another week)...
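A minimal sketch of grouping into 7-day buckets counted back from the supplied date, assuming a hypothetical dbo.Orders table with an OrderDate column:

DECLARE @AnchorDate date = '20140310';   -- the "date given to me"

SELECT DATEDIFF(DAY, OrderDate, @AnchorDate) / 7 AS WeeksBack,
       MIN(OrderDate) AS WeekStart,
       MAX(OrderDate) AS WeekEnd,
       COUNT(*)       AS OrderCount
FROM dbo.Orders
WHERE OrderDate <= @AnchorDate
GROUP BY DATEDIFF(DAY, OrderDate, @AnchorDate) / 7
ORDER BY WeeksBack;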
I am trying to replicate data from a view in the publisher to a table in the subscriber (transactional replication). I do not need the view's base table, or the view itself, replicated to the subscriber. I only want the data from the view to feed a table in the subscriber.
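One hedged option, assuming the view can be made an indexed view: publish it as an 'indexed view logbased' article so its data lands in a table at the subscriber. The publication, view, and destination names below are illustrative:

EXEC sp_addarticle
    @publication       = N'MyPublication',
    @article           = N'vw_FormSummary',
    @source_owner      = N'dbo',
    @source_object     = N'vw_FormSummary',
    @type              = N'indexed view logbased',   -- replicates the view's data as a table
    @destination_table = N'FormSummary';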
I have a bunch of heap tables and the fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
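A hedged way to look at heap fragmentation and forwarded records, and to rebuild a heap in place (2008+) without committing to a clustered index; the table name is an assumption:

SELECT OBJECT_NAME(ips.object_id) AS TableName,
       ips.avg_fragmentation_in_percent,
       ips.forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_id = 0;                 -- index_id 0 = heap

ALTER TABLE dbo.MyHeapTable REBUILD;    -- compacts the heap and removes forwarded records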
With all the new functionality, can 2014 now restore a single table from a standard backup without using any third party tools? I have looked, but can't see this listed as a feature (though that doesn't mean it's not there, maybe I've just missed it).
I have a master table with an AFTER INSERT trigger on it. When a record is inserted into the master table, the trigger fires and the record is captured in the back office table. If the trigger fails, my record ends up in neither the master table nor the back office table.
Is there any way to capture the record in either the master table or a separate table?
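A hedged sketch of one approach: wrap the audit insert in TRY/CATCH inside the trigger so a logging failure does not roll back the master insert. All table and column names are assumptions, and if the error dooms the transaction (XACT_STATE() = -1) the master insert will still roll back:

CREATE TRIGGER trg_Master_AfterInsert
ON dbo.MasterTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO dbo.BackOffice (MasterId, CreatedAt)
        SELECT i.MasterId, SYSDATETIME()
        FROM inserted AS i;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1   -- transaction is still committable
            INSERT INTO dbo.TriggerErrorLog (ErrorMessage, LoggedAt)
            VALUES (ERROR_MESSAGE(), SYSDATETIME());
    END CATCH;
END;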
I'm trying to find out what tables are being used in a Database.
I don't want just the last user; I want every user and the dates.
I have a script that returns the last user, but that is not going to work.
The following script returns the last user, but not all users and not the login name:
WITH LastActivity (ObjectID, LastAction) AS
(
    SELECT object_id, last_user_seek
    FROM sys.dm_db_index_usage_stats AS u
    WHERE database_id = DB_ID(DB_NAME())
)
SELECT OBJECT_NAME(ObjectID) AS TableName,
       MAX(LastAction)       AS LastAction
FROM LastActivity
GROUP BY ObjectID;
I have a job scheduled that imports a table from an Oracle database. The job runs at 3am and reports success, but for some reason when I query the table to see how many records there are, I see the same row count as the day before (it should increase every day, since it tracks student enrollment). When I execute the package manually, the table updates fine.
The error is "The table with Name of 'Table Name' does not exist. An error occurred when loading the Model."
We get this error when we try to check the properties of an Analysis Services server using SQL Server Management Studio. We have resolved this issue twice by stopping the SQL Server Analysis Services service, removing the db folders from the Analysis Services Data folder, and starting the service back up. The db folders we removed were the ones advised by the BI team.
Is there any way to enforce table references in stored procedures? For example, we have stored procedures with a ton of different formats: "dbo.table", "table", "db.dbo.table", etc. Can we make it so that for every stored procedure, the reference must be at least "dbo.table"?
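I'm not aware of a setting that enforces this, but a hedged sketch of a check you could run (or wire into a review step): unqualified references show up in sys.sql_expression_dependencies with a NULL referenced_schema_name:

SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS ProcSchema,
       OBJECT_NAME(d.referencing_id)        AS ProcName,
       d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
JOIN sys.procedures AS p ON p.object_id = d.referencing_id
WHERE d.referenced_schema_name IS NULL;    -- reference was written without a schema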
I am setting up extended events more or less just fine; however, I am a bit confused as to how to read them and load them into a table for querying. In particular the offset part - is there a way to load in just a given date's worth?
I've got the files configured to be 20MB before rolling over, the XE is running all the time.
So if I load in the full file now, say that covers 2.5 days' worth, when I load it again tomorrow to get the updated data I'm also reloading today, which is a waste?
I presume I am going about this wrong, but lack an example that really goes into detail of practicalities of loading this data.
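A hedged sketch of incremental loading with sys.fn_xe_file_target_read_file: save the file_name and file_offset of the last event you loaded, and pass them as the starting point on the next run. The path, file name, and offset below are placeholders:

SELECT CAST(event_data AS xml) AS event_xml,
       file_name,
       file_offset                    -- save the last values for the next load
INTO #NewEvents
FROM sys.fn_xe_file_target_read_file(
         N'C:\XE\MySession*.xel',     -- file pattern
         NULL,                        -- no .xem metadata file needed on 2012+
         N'C:\XE\MySession_0_130000000000000000.xel',   -- last file from the previous load
         123456);                     -- last file_offset from the previous load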
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also did a check of what's reported from fn_my_permissions(null, null) and it shows minimal permissions, like I'd expect.
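For reference, a minimal sketch of that custom-server-role variant on 2012+; the role and login names are assumptions:

USE master;

CREATE SERVER ROLE ErrorLogReaders;
DENY ALTER ANY LOGIN TO ErrorLogReaders;

-- securityadmin satisfies the GUI's sp_readerrorlog membership check;
-- the custom role's DENY stops members from touching logins.
ALTER SERVER ROLE securityadmin   ADD MEMBER [DOMAIN\HelpdeskUser];
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\HelpdeskUser];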
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB and 13 million rows. During this load the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. Tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 fields have the datatype int, 1 has smallint, 1 has varchar(200)). The disk for the data files still has space (incl. the Hekaton files).
How can I reduce the size of the LDF file during the load of the data?
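A hedged sketch of one way to keep the log in check: load in committed batches so the log can truncate between chunks (with frequent log backups running, or SIMPLE recovery during the load). Table and key column names are assumptions:

DECLARE @BatchSize int = 500000, @Rows int = 1;

WHILE @Rows > 0
BEGIN
    -- each batch is its own transaction, so the log can clear between iterations
    INSERT INTO dbo.MemOptTarget (Key1, Key2, Key3, Key4, Payload)
    SELECT TOP (@BatchSize) s.Key1, s.Key2, s.Key3, s.Key4, s.Payload
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.MemOptTarget AS t
                      WHERE t.Key1 = s.Key1 AND t.Key2 = s.Key2
                        AND t.Key3 = s.Key3 AND t.Key4 = s.Key4);

    SET @Rows = @@ROWCOUNT;
END;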