SQL 2012 :: How To Remove Records From Repository / Alerts
Feb 18, 2014
We configured IDERA SQL Safe for backups and restores, and we set up email notifications. One day we performed a manual backup operation for 150 databases from the SQL Safe tool. Unnoticed, it backed up to the C:\Backup folder.
We got alerts with the report of backups on the C: drive. We then moved the backup files to their respective folders, but I could not clear the records from the report. It has been 25 days and we are still receiving the alerts below. How can I clear this report, or configure or set up anything to avoid this in future as well?
Below are sample records from the report. I need this report to be cleared.
Subject: SQL Safe Validation Report
The following files are recorded in the SQLSafe repository, but no longer exist in their locations:
C:\Backup\LUXOR_DB_GroupEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_InsightEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_IMEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_SiteEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_SiEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_ICEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_vdbEPF_Diff_201402050045 (1 of 1).safe
C:\Backup\LUXOR_DB_M_SEPF_Diff_201402050045 (1 of 1).safe
I have a SQL 2012 database that has 10 tables. One of the tables is populated by a manual import from a CSV file. Each time a user calls custom ASP.NET code, records get inserted into a table called forecast_data with an incrementing FileID. So the first import has a FileID of 1, the second import has a FileID of 2, and so on.
What I am trying to do is keep only the data that has the highest FileID (MAX(FileID)). I would like to write a stored procedure that removes all older data once a new import is written into the table.
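For what it's worth, a minimal sketch of such a procedure, assuming the table is dbo.forecast_data as described above (the procedure name is my own):

-- Hedged sketch: keeps only the rows carrying the current MAX(FileID); the procedure name is an assumption.
CREATE PROCEDURE dbo.usp_PurgeOldForecastData
AS
BEGIN
    SET NOCOUNT ON;

    -- Delete every import except the one with the highest FileID.
    DELETE FROM dbo.forecast_data
    WHERE FileID < (SELECT MAX(FileID) FROM dbo.forecast_data);
END

Calling EXEC dbo.usp_PurgeOldForecastData; right after each import (for example from the ASP.NET code that loads the file) would keep only the latest file's rows.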
Writing the query for the following, I need to collapse the continuity. If the termdate for an ID is one day less than the effdate of the next record (for the same ID), I need to collapse the records. See the example below. How should I write the query that will give me the desired output, i.e. get MIN(effdate) and MAX(termdate) whenever termdate is one day less than the effdate of the next record?
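A hedged sketch of the usual gaps-and-islands approach, assuming a table dbo.Coverage(ID, effdate, termdate) — the table name is an assumption:

-- Rows whose effdate is exactly one day after the previous row's termdate (per ID)
-- belong to the same island; each island collapses to MIN(effdate)/MAX(termdate).
;WITH Ordered AS
(
    SELECT ID, effdate, termdate,
           CASE WHEN DATEDIFF(day,
                              LAG(termdate) OVER (PARTITION BY ID ORDER BY effdate),
                              effdate) = 1
                THEN 0 ELSE 1 END AS IsNewIsland
    FROM dbo.Coverage   -- assumed table name
),
Grouped AS
(
    SELECT ID, effdate, termdate,
           SUM(IsNewIsland) OVER (PARTITION BY ID ORDER BY effdate
                                  ROWS UNBOUNDED PRECEDING) AS IslandNo
    FROM Ordered
)
SELECT ID, MIN(effdate) AS effdate, MAX(termdate) AS termdate
FROM Grouped
GROUP BY ID, IslandNo;

LAG and the running SUM both require SQL Server 2012 or later, which matches the version in the title.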
EDITION    PRODUCT         INSERTDATE
---------- --------------- ----------------------
CNE        TN-Town News    12/19/2007 12:00:00 AM
TN         TN-Town News    12/19/2007 12:00:00 AM
What I have to do is: if there are multiple records for one product on any day, then I need to remove all of those records. In this case I am getting two records for the PRODUCT 'TN-Town News' with INSERTDATE = 12/19/2007, so I need to remove both of these records from the table.
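A hedged sketch, assuming the rows live in a table I'll call dbo.ProductLog with the columns shown above:

-- Delete every row belonging to a (PRODUCT, calendar day) combination that occurs more than once.
;WITH Counted AS
(
    SELECT *,
           COUNT(*) OVER (PARTITION BY PRODUCT, CONVERT(date, INSERTDATE)) AS cnt
    FROM dbo.ProductLog   -- assumed table name
)
DELETE FROM Counted
WHERE cnt > 1;

Deleting through the CTE removes the rows from the underlying table, so both 'TN-Town News' rows for 12/19/2007 would go.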
Is there some Transformation or other method to remove outlying records based on an attribute during a Data Flow task?
I have a list of Organizations complete with a list of Products they have bought. I am going to do some data mining/profiling off of this data, but first I need to get rid of the top 25% and bottom 25% of records by quantity within each Product. I've looked at the Percentage/Row Sampling transformations, but they are too simple.
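One option is to do the trimming in the source query of the Data Flow rather than in a transformation. A hedged sketch, with dbo.OrgProducts, Product and Quantity as assumed names:

-- NTILE(4) splits each Product's rows into quartiles by Quantity;
-- keeping quartiles 2 and 3 drops the top 25% and bottom 25%.
;WITH Ranked AS
(
    SELECT *,
           NTILE(4) OVER (PARTITION BY Product ORDER BY Quantity) AS Quartile
    FROM dbo.OrgProducts   -- assumed table name
)
SELECT *
FROM Ranked
WHERE Quartile IN (2, 3);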
We imported approximately 2.9 million records from our mainframe server into our SQL Server but have run into a problem. The data in a few of the fields contains both leading and trailing spaces. An example of the data would be like this, using periods to represent spaces:
What we have:
..1A02938.....
What we need:
1A02938 (no spaces)
Is there some sort of algorithm I can run on the data to remove those spaces? The problem is coming up when trying to perform a SELECT query. We try something like:
SELECT * FROM PCPIPT0 WHERE PANO20 = '1A02938', but we get zero results because of the spaces in the database. The datatype of the field is char(20) because we need some flexibility on the size of the data stored.
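A one-time cleanup with LTRIM/RTRIM should take care of the imported spaces (table and column names taken from the query above):

-- Strip leading and trailing spaces from the imported values.
UPDATE dbo.PCPIPT0
SET PANO20 = LTRIM(RTRIM(PANO20));

-- char(20) still pads with trailing spaces, but trailing blanks are ignored in equality
-- comparisons; it is the leading spaces that break the search.
SELECT * FROM dbo.PCPIPT0 WHERE PANO20 = '1A02938';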
I currently have one table that lists all projects and tasks within the organisation. One of the table's fields is the task status, open or closed. I would like a process by which completed tasks are placed into another (archive) table and then removed from the original table, which would then contain only the incomplete tasks. This process could run at given times during the day, or at the point when the status of a task is changed from open to closed; either way, each time the process runs it would need to append the removed rows to the archive table. Anyone any ideas on the best way to do this?
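One way to do the move in a single atomic statement is DELETE with an OUTPUT clause. A hedged sketch, assuming dbo.Tasks, a matching archive table dbo.Tasks_Archive, and a Status column holding 'Closed':

-- Move closed tasks into the archive table and remove them from the live table in one statement.
DELETE FROM dbo.Tasks
OUTPUT deleted.*
INTO dbo.Tasks_Archive        -- assumed archive table with the same column layout
WHERE Status = 'Closed';      -- assumed status column/value

This could be scheduled as a SQL Agent job, or run from the code that closes a task.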
I have a table "Geography" that has the following columns: city, state, zip
There are tons of duplicate cities in this table. I ran this query and it shows me the number of occurrences of each city. I want to delete all the duplicates except for one. I don't want to do this manually as there are a lot of records.
What would the SQL look like to delete the duplicate records but keep at least one?
We all were new at one point.... any help is appreciated.
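The usual pattern is to number the duplicates with ROW_NUMBER() and delete everything past the first row. A sketch against the Geography table from the post (adjust the PARTITION BY list to whatever defines a duplicate for you):

-- Keep one row per (city, state, zip) and delete the rest.
;WITH Dup AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY city, state, zip
                              ORDER BY (SELECT NULL)) AS rn
    FROM dbo.[Geography]
)
DELETE FROM Dup
WHERE rn > 1;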
Objective:
Combine two 49,000-row tables and remove records where there is only a one-column difference (keeping the row with the specified column value and removing the one with a blank).
Reason:
I have 2 people going through a list, coding a specific column with a single letter value. They both have different progress on each sheet. Hence I am trying to UNION them and have a result of their combined efforts without duplicates.
My progress/where I'm stuck:
Here is my first query/union:
SELECT * FROM [Eds table] UNION SELECT * FROM [Vickis table];
As shown above, I have unioned these two tables and my results removed the obvious whole-record duplicates, but since one column differs between the remaining rows, a union without further criteria considers them unique.
An example of the duplicates that I must remove is as follows:
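Setting the specific example rows aside, here is a hedged sketch of one way to keep the non-blank coded row after the UNION; the key column (ID) and the hand-coded column (Code) are assumptions:

-- After combining both tables, keep one row per ID, preferring the row whose Code is filled in.
;WITH Combined AS
(
    SELECT * FROM [Eds table]
    UNION
    SELECT * FROM [Vickis table]
),
Ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY CASE WHEN Code IS NULL OR Code = '' THEN 1 ELSE 0 END) AS rn
    FROM Combined
)
SELECT *
FROM Ranked
WHERE rn = 1;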
I have a requirement where I want to delete records based on a date column. I have a table which contains columns like machinename and lasthardwarescandate.
I want to delete records based on MAX(lasthardwarescandate), i.e. keep only the latest one, wherever the machine name is duplicated (it repeats). So how would I remove the duplicate machine names based on the lasthardwarescandate column? (There are multiple entries per machine, so I want to keep only the row with the latest date.)
Note: Duplication should be removed based on the "Last Hardware Scan" date.
Only the latest date should be considered from multiple records for the same system.
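A hedged sketch, assuming the rows are in a table I'll call dbo.MachineScans(machinename, lasthardwarescandate):

-- Keep the newest scan row per machine and delete every older duplicate.
;WITH Ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY machinename
                              ORDER BY lasthardwarescandate DESC) AS rn
    FROM dbo.MachineScans   -- assumed table name
)
DELETE FROM Ranked
WHERE rn > 1;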
I created a mirrored DB, added a new datafile to the principal using a path the mirror can't access. As a result the mirroring session was suspended because the mirror stated: "CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file".
Fine, so I disabled mirroring on the principal (SET PARTNER OFF), which worked just fine, but the mirror stayed in a mirrored state, with Object Explorer saying "(Mirror, Disconnected / In Recovery)". So I tried to disable database mirroring on the mirror using ALTER DATABASE db SET PARTNER OFF;, which completed successfully, but the DB STILL remained in a mirrored configuration.
I tried ALTER DATABASE db SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS which resulted in
"Msg 1404, Level 16, State 4, Line 1
The command failed because the database mirror is busy. Reissue the command later."
I tried taking the DB offline or restoring it from a backup, but all these operations resulted in "The operation cannot be performed on database "db" because it is involved in a database mirroring session or an availability group."
The only solution I can think of at the moment is shutting down the instance and deleting the data and log files of the mirrored DB, which would be just fine because this is only a test installation, but it would not be quite as easy in a production environment.
Is there any other way to remove the mirrored state from a disconnected mirror or to simply get rid of the db entirely to perform a recovery?
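For what it's worth, the sequence that usually clears a disconnected mirror is to break the partnership on the mirror side and then explicitly recover the database, since it is otherwise left in a restoring state (hedged sketch only; db is the database name from the post):

-- Run on the mirror instance.
ALTER DATABASE db SET PARTNER OFF;
RESTORE DATABASE db WITH RECOVERY;   -- brings the former mirror copy online

Once the database is online (or if you simply want rid of it), DROP DATABASE db should also work.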
I wanted to remove duplicate records from an SSRS report, so I set "Hide Duplicates" to True. It is now working, but I am getting a blank space between the two records, which I want to get rid of. How do I get rid of the extra spaces between two records? (Please find the details below.)
Error:
(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdev1' cannot be removed because it is not empty.
Note: I restarted SQL Server from SSMS and then ran the same commands mentioned above, and I am getting the same error. I also executed the commands above and then restarted the services; no change.
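The usual sequence for dropping a tempdb data file is to empty it first; if that still fails, clearing the caches that keep pages in the file sometimes helps (hedged sketch; tempdev1 is the file name from the error above):

USE tempdb;
GO
DBCC SHRINKFILE (tempdev1, EMPTYFILE);        -- move any used pages off the file
GO
ALTER DATABASE tempdb REMOVE FILE tempdev1;
GO
-- If EMPTYFILE keeps failing, these sometimes release pages pinned by cached objects:
-- DBCC FREEPROCCACHE;
-- DBCC FREESYSTEMCACHE ('ALL');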
The requirement is to unload all columns' data into a CSV file using bcp with a pipe delimiter, but the condition is to remove the milliseconds part of a datetime column.
Ex: 2014-02-19 17:12:14.967; remove .967 from the data while unloading into the CSV.
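A hedged sketch: convert the datetime column to a string in the query that bcp exports, using style 120 (yyyy-mm-dd hh:mi:ss), which drops the milliseconds. The database, table and column names below are assumptions:

-- Query side: the first 19 characters of style 120 give the value without milliseconds.
SELECT col1, col2, CONVERT(varchar(19), dt_col, 120) AS dt_col
FROM MyDb.dbo.MyTable;

and the corresponding bcp call with a pipe delimiter:

bcp "SELECT col1, col2, CONVERT(varchar(19), dt_col, 120) FROM MyDb.dbo.MyTable" queryout C:\out\data.csv -c -t"|" -T -S MYSERVER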
I am trying to remove the dates from a query. My goal is to load it into SSAS and add a time dimension. Right now I have to change the dates every time I run the reports (monthly). Here is the query:
drop table #tmptmp
SELECT *,
    (DATEDIFF(day, enrollmentsDate, ShipmentDate))
    - ((DATEDIFF(WEEK, enrollmentsenttDate, InitialShipmentDate) * 2)
    + (CASE WHEN DATENAME(DW, enrollmentsentDate) = 'Sunday' THEN 1 ELSE 0 END)
    + (CASE WHEN DATENAME(DW, ShipmentDate) = 'Saturday' THEN 1 ELSE 0 END)
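As a hedged, hypothetical sketch (the source table and the ShipmentDate filter below are assumptions), one way to avoid editing dates every month is to compute a rolling window at run time instead of hard-coding it:

-- Previous calendar month, computed from the current date.
DECLARE @MonthStart datetime = DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0);
DECLARE @MonthEnd   datetime = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);

SELECT *
INTO #tmptmp
FROM dbo.Enrollments            -- assumed source table
WHERE ShipmentDate >= @MonthStart
  AND ShipmentDate <  @MonthEnd;

Alternatively, leaving the date filter out entirely and letting the SSAS time dimension slice the data would remove the need to touch the query at all.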
I am planning to implement the following for backup of one of our databases (with the Full recovery model enabled):
1- Create full backup nightly
2- Create transaction log backup every 25 min
As I am taking a full backup every night, I think I can remove transaction log backups at the time of the full backup, since we can apply transaction log backups over a full backup. My question is regarding the removal of transaction log backups.
- Should I remove all transaction log backups and then execute the full backup?
- Should I execute the full backup and remove all transaction log backups older than 24 hours?
- Do I have to consider the SCN or related info before deleting any transaction log backup?
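For what it's worth, the safer of the two is to run the full backup first and only then remove older log backups, ideally after verifying the new full backup (RESTORE VERIFYONLY) or keeping the previous full plus its logs as a fallback; deleting the logs before the full runs leaves a gap if that full backup fails. SQL Server has no SCN; restores follow the LSN chain, and a full backup does not break that chain. A hedged sketch that just lists the candidate log backups from the msdb history (the database name is an assumption):

-- Log backups that finished before the most recent full backup.
DECLARE @LastFull datetime =
(
    SELECT MAX(backup_finish_date)
    FROM msdb.dbo.backupset
    WHERE database_name = 'MyDb'   -- assumed database name
      AND type = 'D'               -- D = full database backup
);

SELECT bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'MyDb'
  AND bs.type = 'L'                -- L = transaction log backup
  AND bs.backup_finish_date < @LastFull
ORDER BY bs.backup_finish_date;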
Windows 2012 R2, SQL 2012 (Primary Replica), SQL 2012 (Secondary Replica), SQL 2012 (Secondary Replica over WAN site)
There are databases replicating on three SQL Servers. The WAN line is having performance issues because of limited bandwidth, so I have to remove the SQL secondary replica over the WAN site temporarily and add it again later when the WAN line is upgraded with better bandwidth. What is the best practice to remove the secondary replica and its replicating databases, and to add them back later from SQL Server Management Studio, without interruptions to the databases?
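A hedged sketch of the documented DDL for this (the availability group and server names are placeholders); the same steps are available through the Availability Groups node in SSMS:

-- On the primary: remove the WAN secondary from the availability group.
ALTER AVAILABILITY GROUP [MyAG] REMOVE REPLICA ON N'WANSERVER\INSTANCE';

-- On the removed secondary the databases are left in RESTORING state; recover or drop them:
-- RESTORE DATABASE MyDb WITH RECOVERY;
-- DROP DATABASE MyDb;

-- Later, after the WAN upgrade: add the replica back (Add Replica wizard, or
-- ALTER AVAILABILITY GROUP ... ADD REPLICA), restore a fresh full + log backup on it
-- WITH NORECOVERY, then join each database:
-- ALTER DATABASE MyDb SET HADR AVAILABILITY GROUP = [MyAG];   -- run on the new secondary

Removing a secondary replica does not interrupt the primary; the databases there stay online throughout.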
I have found some articles with no publication in our transactional replication.
For example, running this:
select p.publication, a.publication_id, a.article
from dbo.MSArticles as a
left outer join dbo.MSpublications as p
    on a.publication_id = p.publication_id
shows this:
publication          publication_id   article
-------------------  --------------   --------------
NULL                 1                org_Community
NULL                 3                org_Community
Purchasing to EDW    5                org_Community
NULL                 1                org_Division
NULL                 3                org_Division
Purchasing to EDW    5                org_Division
How can I get rid of the articles that are not part of a publication?
I can't use sp_droparticle because it requires a publication which these articles do not have.
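As a last resort, some people delete the orphaned rows from the distribution database directly. That is an unsupported edit of replication metadata, so the following is only a hedged sketch: back up the distribution database first and prefer the supported cleanup jobs where they apply.

-- Run in the distribution database; removes article rows whose publication no longer exists.
USE distribution;
GO
DELETE a
FROM dbo.MSarticles AS a
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.MSpublications AS p
    WHERE p.publication_id = a.publication_id
);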
I am creating a report query that returns all unreconciled P/O lines. I am near completion but I am unable to find a way to remove the reconciled records.
I have included a script to produce some sample table, data & query.
The recordset displays 6 rows. All reconciled supplier invoices are duplicated and have transaction codes 40 and 50 and a reconcile code of 9 (5024, 921689471).
All unreconciled invoices appear only once and have transaction code 40 and a reconcile code of 0 (4835 & 921978016). These are the only records that I want to show.
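A hedged sketch of one way to filter them out; the table and column names (POLines, OrderNo, TranCode, ReconcileCode) are assumptions standing in for the ones in your script:

-- Keep only orders that have no reconciled row at all.
SELECT t.*
FROM dbo.POLines AS t
WHERE t.TranCode = 40
  AND t.ReconcileCode = 0
  AND NOT EXISTS
  (
      SELECT 1
      FROM dbo.POLines AS r
      WHERE r.OrderNo = t.OrderNo
        AND r.ReconcileCode = 9
  );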
I have a table that is riddled with weird characters. So far I have found an escape character for PDF files and a trademark sign. These characters are crashing my SSIS packages. I am able to remove these characters with an update script...
Update TABLE set LEAD_NOTES__C = Replace(LEAD_NOTES__C, nchar(65533) COLLATE Latin1_General_BIN2, '!');
Update TABLE set LEAD_NOTES__C = Replace(LEAD_NOTES__C, nchar(1671) COLLATE Latin1_General_BIN2, '!');
This works fine, but my question is...
I would like to write a script that removes all foreign characters, with the exception of normal characters (@, #, $, %, etc.). I need a dynamic process that handles this so I am not losing time sifting through over 20,000 rows of data and changing my update script for each specific character and column. Although the method above works, I would prefer a dynamic query. I intend to wrap this in a stored procedure that loops through all columns of a table (passed as a parameter).
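A hedged sketch of one way to do it: a helper function that strips every character outside the printable ASCII range, plus dynamic SQL that applies it to each string column of a table. The function, table and variable names are assumptions.

-- Removes every character outside CHAR(32)-CHAR(126).
CREATE FUNCTION dbo.fn_RemoveNonAscii (@s nvarchar(max))
RETURNS nvarchar(max)
AS
BEGIN
    DECLARE @pos int = PATINDEX(N'%[^ -~]%', @s COLLATE Latin1_General_BIN);
    WHILE @pos > 0
    BEGIN
        SET @s = STUFF(@s, @pos, 1, N'');
        SET @pos = PATINDEX(N'%[^ -~]%', @s COLLATE Latin1_General_BIN);
    END
    RETURN @s;
END
GO

-- Build and run an UPDATE for every varchar/nvarchar column of the target table.
DECLARE @table sysname = N'dbo.LEADS',        -- assumed table name (would be the proc parameter)
        @sql   nvarchar(max) = N'';

SELECT @sql = @sql
    + N'UPDATE ' + @table
    + N' SET ' + QUOTENAME(c.name) + N' = dbo.fn_RemoveNonAscii(' + QUOTENAME(c.name) + N');'
    + NCHAR(13)
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(@table)
  AND c.system_type_id IN (167, 231);         -- varchar, nvarchar

EXEC sys.sp_executesql @sql;

Wrapping the second half in a stored procedure with @table as its parameter gives the loop-over-columns behaviour described above.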