I wrote code to archive data. I created a control table that sets the maximum number of records to archive and delete per batch, and the process loops until it finishes or a flag set in that table stops the archiving. This let me limit the number of records committed at a time.
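A minimal sketch of that batching pattern, assuming hypothetical objects dbo.ArchiveControl (holding BatchSize and StopFlag), dbo.SourceTable, dbo.ArchiveTable, and a CreatedDate column (adjust the names and the age filter to the real schema):

DECLARE @BatchSize int, @Stop bit, @Rows int;
SET @Rows = 1;

SELECT @BatchSize = BatchSize, @Stop = StopFlag FROM dbo.ArchiveControl;

WHILE @Rows > 0 AND @Stop = 0
BEGIN
    BEGIN TRAN;

    -- move one batch: the deleted rows are written straight into the archive table
    DELETE TOP (@BatchSize)
    FROM dbo.SourceTable
    OUTPUT deleted.* INTO dbo.ArchiveTable
    WHERE CreatedDate < DATEADD(DAY, -90, GETDATE());

    SET @Rows = @@ROWCOUNT;
    COMMIT;

    -- re-read the flag so a long-running archive can be stopped from outside
    SELECT @Stop = StopFlag FROM dbo.ArchiveControl;
END

Keeping each batch in its own transaction is what limits how many records are committed at a time.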
I am using SQL 2012 SE. I have two databases, say A and B, with the same structure and relationships; there are 65 tables in each database. A is already replicating data to database C for 35 tables. Now, every day, I need to move the data newer than getdate()-1 from A to B for all the tables, and once the move is done I need to delete that data from A, then do the same thing the next day and every day after. Since this involves 65 tables, identifying the insert order is challenging. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any stored procedure that could generate the insert-order script? The Generate Scripts "data only" option scripts out the entire data set, and these databases are almost 400 GB, with some tables over 200 million rows, so it takes forever.
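One way to approximate the insert order without a third-party tool is to walk the foreign-key catalog views and assign each table a level, parents before children. This is only a sketch: self-referencing keys are ignored, and tables involved in truly circular references will not appear and have to be ordered by hand.

WITH deps AS (
    SELECT DISTINCT
           child  = f.parent_object_id,       -- referencing (child) table
           parent = f.referenced_object_id    -- referenced (parent) table
    FROM sys.foreign_keys AS f
    WHERE f.parent_object_id <> f.referenced_object_id
), ordered AS (
    SELECT t.object_id, lvl = 0
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM deps AS d WHERE d.child = t.object_id)
    UNION ALL
    SELECT d.child, o.lvl + 1
    FROM ordered AS o
    JOIN deps AS d ON d.parent = o.object_id
)
SELECT SchemaName  = OBJECT_SCHEMA_NAME(object_id),
       TableName   = OBJECT_NAME(object_id),
       InsertOrder = MAX(lvl)                 -- insert ascending, delete descending
FROM ordered
GROUP BY object_id
ORDER BY InsertOrder, TableName;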
I'm looking for a process to archive data through replication. I have a nightly job that purges records in a few source (publisher) tables, retaining only 3 years of data. I have an archive database (subscriber) that contains the prior 3 years of data as well as the current 3 years.
Before the nightly job DELETEs records in the source table, I want to STOP replication so that the delete is not replicated to the archive database. After the job completes, I would like to TURN ON replication again so that any new inserts and updates in the source are still replicated to the archive database.
My DBA tested this, but after the last step of turning replication back ON, the archive database was synced back with the source table.
There are around 70 tables, 30 of which are transactional tables that need the record purge. Developing an ETL process is possible but tedious.
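Rather than stopping and restarting replication (which is what appears to have caused the re-sync the DBA saw), one option is to tell the publication not to forward DELETE commands for the purged articles at all; a hedged sketch, with placeholder publication and article names, repeated per article:

EXEC sp_changearticle
     @publication = N'MyPublication',
     @article     = N'MyTransactionalTable',
     @property    = N'del_cmd',
     @value       = N'NONE',
     @force_invalidate_snapshot = 0,
     @force_reinitialize_subscription = 0;

With del_cmd set to NONE, inserts and updates keep flowing to the archive subscriber while the nightly purge deletes stay local to the publisher.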
I have been assigned a project called "Archive off old data". As a SQL DBA, I know the usual way to archive a large table is data partitioning, but this table is not big enough to justify partitioning, and it has 32 dependencies (other tables, views, and stored procedures). The data from that table needs to be archived off every week to a different database on a different server. My questions are:
1) What is the best way to archive the data? 2) What steps do I need to follow? Do I need to remove the dependencies first and then take out the data?
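For the weekly move, a rough sketch assuming a linked server (called ARCHIVESRV here) that points at the archive server, plus placeholder database, table, and column names; in practice you would verify the copy before deleting, batch the delete, and schedule the whole thing as a weekly SQL Agent job:

INSERT INTO ARCHIVESRV.ArchiveDB.dbo.BigTable (Col1, Col2, CreatedDate)
SELECT Col1, Col2, CreatedDate
FROM dbo.BigTable
WHERE CreatedDate < DATEADD(WEEK, -1, GETDATE());

DELETE FROM dbo.BigTable
WHERE CreatedDate < DATEADD(WEEK, -1, GETDATE());

Dependencies (views, procedures, foreign keys) normally do not have to be removed just to copy rows out; they only matter if the rows being deleted are still referenced by rows that stay behind.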
We are running SQL Server 2005 express on Windows 2003. The database server gets significant amounts of data.
Because of the 4 GB database size limit, we have a daily cron task that goes through and deletes data older than 90 days.
We would like a way to archive this data instead of deleting it. Is there any way to take the data, compress it, and store it in a different way so that, if needed, customers can query directly from the compressed data? Clearly, querying from compressed data would be slower, but that is OK.
Any other solutions that would allow us to archive data instead of deleting it? Thanks.
We have transactional replication set up to replicate data from production across to a reporting server.
We want to ARCHIVE production, but don't want the ARCHIVE duplicated on the reporting server.
Does anyone know of a way that the reporting server can be stopped from replicating these changes, and continue to hold the FULL history of the database?
I have a table that contains a huge number of rows, and a performance issue has been raised. I am thinking of archiving some data so that the table will not be that big. The most convenient way is to move it to another table. The problem is: will this solve my performance problem, or do I need to move it to another database to reduce the database size? Regards, TrueNo
The purchased application (MSSQL 6.5, SP4) that I am working on has one large table with 13 million rows. It is the largest table, considering the next-largest table is only 1.75 million rows.
The vendor has recommended a change to this largest table: changing a data type from char to varchar. To make this change easier to do, I want to "archive" older data that is not needed for the current year or current processing to another table.
What is the best way to do this archiving?
Any information you can provide will be greatly appreciated. Thanks.
I have two tables, say A and Archive. After a certain period of time some records are to be sent to the archive table. To copy records to the archive table I am using SqlBulkCopy operations. Now I have to delete those records from table A. I was thinking of sending a comma-separated list of the IDs of the rows to be deleted to a stored procedure. Are there any better techniques to move data to the archive table and remove it from the main table? Thanks.
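If the two-step approach stays, a table-valued parameter is usually a cleaner way than a comma-separated string to hand the IDs to the delete procedure (alternatively, a single DELETE ... OUTPUT INTO the archive table can do the copy and the delete in one statement). A hedged sketch with placeholder names, requiring SQL Server 2008 or later:

CREATE TYPE dbo.IdList AS TABLE (Id int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.DeleteArchivedRows
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- delete only the rows whose IDs were passed in
    DELETE a
    FROM dbo.A AS a
    JOIN @Ids AS i ON i.Id = a.Id;
END
GO

From ADO.NET the parameter is passed as a DataTable with SqlDbType.Structured, so no string splitting is needed on the server.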
I have 10k indexes I need to rebuild and each time the script reaches an error it stops all further activity. How can I append 'GO' to the end of each line so it will continue on error messages?
Once I have the syntax I can do a find and replace function in Notepad++
For example, this is what I want the generated script to look like, with CHAR(13) + CHAR(10) and GO appended after each statement:

USE [AdventureWorks2014]
GO
ALTER INDEX [IX_Person] ON [Person].[Person] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF)
GO
ALTER INDEX [IX_Emp] ON [HumanResources].[Employee] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF)
GO
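If the statements are being generated from the catalog views anyway, the GO separators can be built in at that point; a rough sketch (adjust the WITH options as needed) that emits one ALTER INDEX per index, each followed by GO on its own line:

SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF)'
     + CHAR(13) + CHAR(10) + 'GO'
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE i.name IS NOT NULL;   -- skip heaps

Because GO is a batch separator, an error in one rebuild aborts only that batch and the script carries on with the next one.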
I want to monitor the replicated row count of a table: if the count at the publisher (table) and the subscriber (table) are not equal, it has to send a mail with the count difference.
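A hedged sketch of the check, assuming the subscriber is reachable through a linked server (called SUBSCRIBER here) and that Database Mail is configured; the table, profile, and recipient names are placeholders:

DECLARE @pub int, @sub int, @msg nvarchar(400);

SELECT @pub = COUNT(*) FROM dbo.MyTable;
SELECT @sub = COUNT(*) FROM SUBSCRIBER.SubscriberDB.dbo.MyTable;

IF @pub <> @sub
BEGIN
    SET @msg = N'Row count mismatch on dbo.MyTable: publisher = '
             + CAST(@pub AS nvarchar(20)) + N', subscriber = '
             + CAST(@sub AS nvarchar(20)) + N', difference = '
             + CAST(ABS(@pub - @sub) AS nvarchar(20));

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'DBAMailProfile',
         @recipients   = N'dba-team@example.com',
         @subject      = N'Replication row count mismatch',
         @body         = @msg;
END

Wrapped in a SQL Agent job, this can be repeated (or driven from a table of article names) for each published table.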
Is there a better way to query SQL 2012 to return the code of a stored procedure as a single row? I'm trying to write a script to insert the contents of the procs between my dev/test/prod environments, so people can query a single table for any proc that is different between environments. At the moment I am using the syscomments view and its text column, but the problem is that a lengthy proc gets cut up into multiple rows.
I can get around it by converting the text to varchar(max) and outer joining the query, but as you can see from my code below, I have to guess the maximum number of rows I'm going to get back for my largest proc. If someone adds a new one that comes back as 8 rows, I'm going to miss it with this query.
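One possible alternative to syscomments is sys.sql_modules, which keeps the whole definition in a single nvarchar(max) column, so even a lengthy proc comes back as one row (OBJECT_DEFINITION(OBJECT_ID('dbo.MyProc')) does the same for a single object):

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id)        AS ProcName,
       m.definition                    AS ProcText   -- full text, one row per proc
FROM sys.sql_modules AS m
JOIN sys.procedures  AS p ON p.object_id = m.object_id;

That removes the need to guess how many 4000-character slices the largest proc will occupy.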
I am wondering if it is possible to archive records out of a SQL Server database. The records would then be removed from the original database. This process should be able to be run multiple times (with a user interface).
It takes me a long time to view, in EM, the current file under Management / SQL Server Logs. Is there a way I can shorten that file so it won't take so long to view the current log?
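If the problem is just that the current log has grown too large to open, cycling the error log starts a fresh file and keeps the old one as an archive (this can also be scheduled so the log never grows unmanageably between restarts):

EXEC sp_cycle_errorlog;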
I need a script that can be scheduled on a SQL 7 server to archive a table's data for every day except the current one, then pull the last 7 days' worth of data back into the table. Does anyone have anything like this? My database is a load-totaling database and the size is getting out of control.
I have these two tables, Log and CategoryLog. I need to archive records older than 13 months from both tables into two separate archive tables and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; CategoryLog does not have any date column, but the two tables are connected by a column (LogID). How can I archive the data and then delete the archived data from both tables?
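A hedged sketch, assuming archive tables dbo.Log_Archive and dbo.CategoryLog_Archive with matching columns and a date column on Log called LogDate (a placeholder name); CategoryLog has no date column, so it is filtered through its parent via LogID:

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(MONTH, -13, GETDATE());

BEGIN TRAN;

-- copy children first (dated through the parent), then the parents
INSERT INTO dbo.CategoryLog_Archive
SELECT cl.*
FROM dbo.CategoryLog AS cl
JOIN dbo.[Log] AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;

INSERT INTO dbo.Log_Archive
SELECT l.*
FROM dbo.[Log] AS l
WHERE l.LogDate < @cutoff;

-- delete children first, then the parents, so any foreign key stays satisfied
DELETE cl
FROM dbo.CategoryLog AS cl
JOIN dbo.[Log] AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;

DELETE FROM dbo.[Log]
WHERE LogDate < @cutoff;

COMMIT;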
My web project (ASP.NET 2.0 / C#) runs against SQL Server 2000 and uses System.Data.SqlClient. I use System.Data.SqlClient.SqlConnection and System.Data.SqlClient.SqlCommand to make the connections to the database and do selects and updates. Is it correct to continue to use these against SQL Server 2005? I ask because I made a connection string (outside of .NET) for SQL Server 2005 using the SQL native provider and it had Provider=SQLNCLI.1, whereas any connection strings I had made (also outside of ASP.NET) for SQL Server all used Provider=SQLOLEDB.1. This is why I wondered if there is a different SqlClient in .NET 2.0 for SQL Server 2005? Cheers, Al
I think I am definitely thrashing and not getting anywhere on something I think should be pretty simple to accomplish: I need to pull the total amounts for compartments with different products that are under the same manifest and the same document number, conditionally based on whether the document types are "Starting" or "Ending", but with the values coming from the "Adjust" records.
So here is the DDL, sample data, and the ideal return rows
CREATE TABLE #InvLogData
(
    Id BIGINT,            -- is actually an identity column
    Manifest_Id BIGINT,
    Doc_Num BIGINT,
    Doc_Type CHAR(1),     -- S = Starting, E = Ending, A = Adjust
    Compart_Id TINYINT,
[Code] ....
I have tried a combination of the below statements but I keep coming back to not being able to actually grab the correct rows.
SELECT DISTINCT(column X) FROM #InvLogData GROUP BY X HAVING COUNT(DISTINCT X) > 1
One further minor problem: I need this to be a set-based solution. This table grows by a couple hundred thousand rows a week; a co-worker suggested using a <shudder/> cursor to do the work, but it would never be performant.
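A set-based sketch under a couple of assumptions: the amount lives in a column not shown in the trimmed DDL (called Amount here, purely hypothetical), and a compartment qualifies when the same manifest/document also has Starting or Ending rows while the value itself comes from the Adjust rows:

SELECT d.Manifest_Id,
       d.Doc_Num,
       d.Compart_Id,
       TotalAmount = SUM(CASE WHEN d.Doc_Type = 'A' THEN d.Amount ELSE 0 END)
FROM #InvLogData AS d
GROUP BY d.Manifest_Id, d.Doc_Num, d.Compart_Id
HAVING SUM(CASE WHEN d.Doc_Type IN ('S', 'E') THEN 1 ELSE 0 END) > 0;

Conditional aggregation like this stays a single pass over the table, so it should scale far better than a cursor as the table grows.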
Of all the Visual Basic.NET data access books that I have purchased and all the Internet site example code that I have reviewed, none have had any good examples of multi-threaded VB.NET code doing data access.
I am trying to avoid the non-responsiveness in a VB app while a simple data retrieval from SQL Server 2005 is in progress.
If anyone knows of any book titles or web sites that have example code (good or not) of multi-threaded VB.NET applications doing data access against Microsoft SQL Server (7, 2000, or 2005) or even against Microsoft Access(TM), it would be very much appreciated if you could provide the book title or URL to point me in the right direction.
Hi all--I'm trying to convert a function which I inherited from a SQL Server 2000 DTS package to something usable in an SSIS package in SQL Server 2005. Given the original code here:

Function Main()
    On Error Resume Next
    Dim cn, i, rs, sSQL
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=sqloledb;Server=<server_name>;Database=<db_name>;User ID=<sysadmin_user>;Password=<password>"
    Set rs = CreateObject("ADODB.Recordset")
    Set rs = DTSGlobalVariables("SQLstring").value

    For i = 1 To rs.RecordCount
        sSQL = rs.Fields(0).Value
        cn.Execute sSQL, , 128   ' adExecuteNoRecords option for faster execution
        rs.MoveNext
    Next

    Main = DTSTaskExecResult_Success
End Function
This code was originally programmed in the SQL Server ActiveX Task type in a DTS package designed to take an open-ended number of SQL statements generated by another task as input, then execute each SQL statement sequentially. Upon this code's success, move on to the next step. (Of course, there was no additional documentation with this code. :-)
Based on other postings, I attempted to push this code into a Visual Studio BI 2005 Script Task with the following change:
public Sub Main()
...
Dts.TaskResult = Dts.Results.Success
End Class
I get the following error when I attempt to compile this:
Error 30209: Option Strict On requires all variable declarations to have an 'As' clause.
I am new to Visual Basic, so I'm on a learning curve here. From what I know of this script: - The variables here violate the new Option Strict On requirement in VS 2005 to declare what type of object your variable is supposed to use.
- I need to explicitly declare each object, unless I turn off the Option Strict On (which didn't seem recommended, based on what I read).
Given this statement:
dim cn, i, rs, sSQL
I'm looking at "i" as type Integer; rs and sSQL are open-ended arrays, but can't quite figure out how to read the code here:
This code seems to create an instance of a COM component, then pass in provider information and create the recordset being passed in by the previous task, but I am not sure whether this syntax is correct for VS 2005 or what data type declaration to make here. Any ideas/help on how to rewrite this code would be greatly appreciated!
I want to write a query where I can see all races, with the age ranges as columns.
TblRace
ID, RaceName
TblAgeRange
ID,AgeRange.
There is no connection between these two tables. I need to display the result like below.
Race 17-20 21-30 31-40
A
B
I
W
How do I get this kind of empty data set so that I can fill it in on the front end, or is there any better solution? There will be as many age-range columns as there are age ranges in the table; it's not static. The above is just an example.
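One way to produce that empty grid is to cross join the two unrelated tables, so every race is paired with every age range, and let the front end (or a dynamic PIVOT) turn the age ranges into columns; the value column here is just a placeholder to be filled in later:

SELECT r.RaceName,
       a.AgeRange,
       CAST(NULL AS int) AS Amount   -- placeholder measure, filled in later
FROM dbo.TblRace AS r
CROSS JOIN dbo.TblAgeRange AS a
ORDER BY r.RaceName, a.AgeRange;

Because the number of age ranges is not static, the pivot itself is best done dynamically (building the column list from TblAgeRange) or left to the front end.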
So I'm using the 2012 SSMS to connect to a SQL 2008 database, upon which we have code that audits all SQL logins to production and notifies us via email that someone is using logins they shouldn't. Mostly it's to notify us if people other than the DBAs are using these logins from their desktops instead of using their windows accounts.
This morning I opened up SSMS 2012 and logged into my production servers in Object Explorer using Windows Auth. An hour later, I had to update an exception table, so I opened a new query window with SQL Auth and used a SQL-only login for it. Immediately the email pops that someone on my desktop is using a SQL login. That's okay. That's expected.
What I didn't expect is that after I closed the query window, the email kept popping. The query window isn't even open / connected to production any more, but SQL still thinks I'm logged in using a SQL Login instead of with Windows Authentication (which is what I used on the Object Explorer connection).
Is there a bug with 2012 that causes a new connection type to affect all current connections? I.E., did it change my DB Windows Auth connection to SQL Auth on the other connection?
was a result of running DBCC DBREINDEX. I wasn't able to replicate this when I did a trace on my local machine, but SQL Monitor from Red Gate shows that query happening for at least some of the tables that are being reindexed via a job. Is there any way to examine the actual code inside DBREINDEX to see exactly what commands it may execute?
We built SSIS packages in a dev environment on Windows 2008 R2 and SQL Server 2012. The same packages were placed on SAN disks in a cluster environment and are invoked under the security context of an admin user with the dtexec utility (we call this using a stored procedure). Rarely, when the procedure completes, I check the log text file written by the SSIS package and find return code 5, which means the package was unable to load. I can't find what the exact reasons for return code 5 are.
I am exporting an SSIS package from a 2008 R2 SQL Server and re-importing it into SQL 2012.
I have inherited a SQL 2008 R2 server running two SSIS packages. I have both as .dtsx files along with a manifest to install. The task at hand is now to migrate those to a new SQL 2012 server. (There is also an IIS instance running on that machine, with project-specific .dlls and other files within the IIS document root. I do not know whether these .dlls relate to the IIS web site or to the SSIS .dtsx packages, but the migrated web site runs fine.)
Installation of the packages into MSDB works flawlessly and one of them runs fine, but the other fails to run with the error:
"Error: the binary code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully."
Google tells me something about a "pre-compile option" setting on the server, as explained here. I cannot find this option setting anywhere in SSMS. Also, as the migration is from 2008 R2 to 2012, this should not be relevant, as in both versions pre-compilation should be automatic, right?
- I installed Visual Studio 2013 Community
- I installed SSDTBI
- I started the "Integration Services Import Project Wizard" to import the SSIS packages directly from the Integration Services Catalog
However, things don't quite work very well. When trying to import (from the locally installed SQL instance), it will not display the tree of SSIS items, only the root directory.
So importing directly from the running system won't work. Let's see if we can get somewhere with the .dtsx files. As I do actually have the .dtsx files here, why not open them up directly in Visual Studio and try to compile whatever needs to be compiled? I created a new Integration Services project and opened the .dtsx files in it. ==> LOTS of errors of all kinds.
(The job of this script is to fetch messages from an Exchange server.) But opening up this specific bit of code doesn't work at all: there is no binary code in it, and I can't tell how to create it or where to get it from...