I need to extract a particular file from our AS/400 system on a daily basis and do some processing on it. I also want the daily processing to run only on those records that have been added or updated since the initial load.
Here is the approach and possible implementation.
Approach
I have DB2 source data containing 10 columns (Col1 to Col5 together form the key) and the rest are attributes. I am interested only in the key and two of the attributes, so I load my table with only Col1 to Col7 (5 for the key plus the two attributes). I then do my processing on this table.
Here is the implementation given by a member of dbforums -
You'll then have to deal with 3 potential actions:
INSERT: New records that appear on the file.
DELETE: Records that are no longer on the file.
UPDATE: Records that are on the file, but whose attributes have changed.
INSERT INTO myTable99(Col1,Col2,Col3,Col4,Col5)
SELECT '1','1','a','b','c' UNION ALL
SELECT '1','2','d','e','f' UNION ALL
SELECT '1','3','g','h','i' UNION ALL
SELECT '1','4','j','k','l'
--DELETED
INSERT INTO myTable00(Col1,Col2,Col3,Col4,Col5)
SELECT '1','1','a','b','c' UNION ALL -- NO CHANGE
SELECT '1','2','x','y','z' UNION ALL -- UPDATE (My comment: instead of an update, I want to insert a new record)
SELECT '1','3','g','h','i' UNION ALL -- NO CHANGE
SELECT '2','3','a','b','c' -- INSERT
GO
SELECT * FROM myTable99
SELECT * FROM myTable00
GO
--DO DELETES FIRST (My comment: before deleting, I want to copy the rows I am about to delete into a separate table to maintain history, and then delete them from a. Also, I don't get the logic: if the rows of the old extract are not in the new extract, then delete them. So shouldn't it be <> instead of =? Why the WHERE clause condition?)
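-- A minimal sketch of the history copy, assuming a hypothetical archive
-- table myTable99_History with the same five columns; run it just before
-- the DELETE below so the doomed rows are preserved:
INSERT INTO myTable99_History (Col1, Col2, Col3, Col4, Col5)
SELECT a.Col1, a.Col2, a.Col3, a.Col4, a.Col5
FROM myTable99 a
LEFT JOIN myTable00 b
ON a.Col1 = b.Col1
AND a.Col2 = b.Col2
WHERE b.Col1 IS NULL AND b.Col2 IS NULL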
DELETE FROM a
FROM myTable99 a
LEFT JOIN myTable00 b
ON a.Col1 = b.Col1
AND a.Col2 = b.Col2
WHERE b.Col1 IS NULL AND b.Col2 IS NULL
-- INSERT (My comment: I don't get the logic of the WHERE here either. If the rows of the new extract are not already in the table, then insert them. So shouldn't it be <> instead of =?)
INSERT INTO myTable99(Col1,Col2,Col3,Col4,Col5)
SELECT a.Col1, a.Col2, a.Col3, a.Col4, a.Col5
FROM myTable00 a
LEFT JOIN myTable99 b
ON a.Col1 = b.Col1
AND a.Col2 = b.Col2
WHERE b.Col1 IS NULL AND b.Col2 IS NULL
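-- A note on the anti-join pattern used in both statements above (a sketch,
-- not a correction): the ON clause matches the key columns with =, so rows
-- with no partner in the other table come back with NULLs, and the WHERE
-- keeps only those unmatched rows. A <> in the join would instead pair every
-- row with every non-matching row. An equivalent NOT EXISTS form of the
-- DELETE, which some find easier to read:
DELETE a
FROM myTable99 a
WHERE NOT EXISTS (SELECT 1
FROM myTable00 b
WHERE b.Col1 = a.Col1
AND b.Col2 = a.Col2)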
-- UPDATE
UPDATE a
SET Col3 = b.Col3
, Col4 = b.Col4
, Col5 = b.Col5
FROM myTable99 a
INNER JOIN myTable00 b
ON a.Col1 = b.Col1
AND a.Col2 = b.Col2
AND ( a.Col3 <> b.Col3
OR a.Col4 <> b.Col4
OR a.Col5 <> b.Col5)
GO
------------
Can anybody look at my comments and answer them, or revise this code template if need be?
Brett Kaiser - I sent you an e-mail on this. Can you respond to it when time permits?
OK, this may have been answered somewhere, but to be honest I don't even know what I need, so it's hard for me to search for it. Here is what I have going on; let's see if anyone out there can answer this one.

I have a 20-year-old system that is tied to some lab equipment we use for coal testing. The equipment runs back to an old Win98 PC running a DOS program that has long since been abandoned by the manufacturer. I can't upgrade the program because that would mean upgrading the hardware, which is somewhere in the neighborhood of $100k at least. The program saves everything in what I believe is a dBase IV file (.dbf), and using Excel I can easily see the data inside.

What I am trying to do, if I can figure out a way other than writing my own custom app, is a daily export/import of the data from the dbf file into a more usable database, then create some spiffy forms to mess with the data. Each day the dbf file is wiped out by the ancient program and a new one is created. My database experience is somewhat limited; I have at best a rudimentary knowledge of SQL, but I should be able to follow along even if I have to do a little research. Mainly what I am looking for is a way to import the dbf file into an already existing database (new or existing table) automatically via a script or something. I want to retain all the data from past days without editing the original DBF file. Can this be done, or am I just asking the impossible?
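Not a definitive answer, but one possible sketch: SQL Server can read a dBase file directly through the Jet OLE DB provider, so a scheduled job could append each day's rows into a permanent table before the old program wipes the file. The paths, table, and column names here are placeholders, and the provider must be installed and enabled on the server:

-- Append today's rows from the DBF into a permanent SQL table.
-- The Jet provider takes the folder as the "database" and the file
-- name (without extension) as the table name.
INSERT INTO dbo.LabResultsHistory (sample_id, test_value, test_date)
SELECT sample_id, test_value, test_date
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'dBase IV;Database=C:\labdata\',
'SELECT * FROM results');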
Here's a really annoying problem. Let's say you have a text file with 2 million rows. Delimiters all look good and rows preview well, but the file has a blank row at, say, line 1234567, way deep in the file. When SSIS encounters the blank row, an error is raised and processing on the file STOPS! I verified this by checking the SSIS log, and I have even developed an error routine to notify me via email when the error occurs (really cool if I do say so myself). The main problem still remains: how to resume processing from the point of failure in the file? Any help is appreciated. Thanks.
I'm using SSMS 2012. I have multiple queries that are run daily (and a few that are run every Monday) that I want to schedule to run at midnight and export to a new xls file on a network drive to be used the next morning. Is there something I can add to each query to accomplish this, or would I have to set something else up to call the queries?
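One hedged sketch: rather than adding something to each query, schedule them with SQL Server Agent and let each job step export with bcp. The server, database, table, and path names below are placeholders, and bcp writes delimited text rather than a true xls file (Excel opens it fine):

EXEC msdb.dbo.sp_add_job @job_name = N'Nightly Export';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Nightly Export',
    @step_name = N'Export report',
    @subsystem = N'CmdExec',
    @command = N'bcp "SELECT * FROM MyDb.dbo.DailyReport" queryout "\\fileserver\reports\DailyReport.csv" -c -t, -T -S MYSERVER';
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Nightly Export',
    @name = N'Midnight',
    @freq_type = 4,          -- daily
    @freq_interval = 1,
    @active_start_time = 0;  -- 00:00:00
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly Export';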
I want to check a table to see which rows have been updated today, then write some data from the selected rows to a text file; then I want to automate this (DTS package? T-SQL stored procedure job?) to run every night at midnight. Is the DTS Wizard the best way? That's what I did, but I have not confirmed that it writes a new text file every day, and does each new version write over the old one?
I have a Windows service that connects to a regular SQL Server 2005 database and basically bulk copies data into a SQL Express database. Afterwards, I just want to copy the MDF file into a different directory. I keep getting the "Process cannot access file because it is being used by another process" error. I've tried changing the connection strings, but nothing seems to work. I'm closing the connections in the code as well. Here is the code; any thoughts or help would be appreciated. RefreshDB calls SyncTable three times, and if all three calls are successful, it attempts to copy the database file to the specified location. This is where I get the error.
Public Function RefreshDB() As Boolean
    Dim success As Boolean = True
    Dim tables() As String = {"Job", "Equipment", "PMScheduled"}
    For Each tableName As String In tables
        If SyncTable(tableName) = False Then
            success = False
            Exit For
        End If
    Next
    If success Then
        'copy the new database to the target directory
        Try
            File.Copy(My.Settings.DBFilePath, My.Settings.TargetDirectory + "EMField.mdf")
        Catch ex As Exception
            My.Application.Log.WriteEntry(ex.Message + "(RefreshDB)", TraceEventType.Critical)
            Return False
        End Try
    End If
    Return success
End Function
Private Function SyncTable(ByVal tableName As String) As Boolean
    Dim reader As SqlDataReader
    Dim sourceViewName As String = String.Format("EMField{0}View", tableName)
    Dim connectionString As String = My.Settings.EMFieldConnectionString
    'insert the path to the database into the connection string
    connectionString = connectionString.Replace("[DBFilePath]", My.Settings.DBFilePath)
    Try
        'clear the target table first
        Using targetConnection As New SqlConnection(connectionString)
            Using truncateCommand As New SqlCommand(String.Format("TRUNCATE TABLE {0}", tableName), targetConnection)
                targetConnection.Open()
                truncateCommand.ExecuteNonQuery()
                targetConnection.Close()
            End Using
        End Using
        'create a datareader from the source database
        Using sourceConnection As New SqlConnection(My.Settings.EMLiteConnectionString)
            Using readCommand As New SqlCommand(String.Format("SELECT * FROM {0}", sourceViewName), sourceConnection)
                sourceConnection.Open()
                reader = readCommand.ExecuteReader
                BulkCopy(tableName, reader, connectionString)
                reader.Close()
                sourceConnection.Close()
            End Using
        End Using
        Return True
    Catch ex As SqlException
        My.Application.Log.WriteEntry(ex.Message + "(sync)", TraceEventType.Critical)
        Return False
    End Try
End Function
Public Sub BulkCopy(ByVal tableName As String, ByRef reader As SqlDataReader, ByVal connectionString As String)
    Try
        'bulk copy from source to target database
        Using bulkCopy As New SqlBulkCopy(connectionString)
            bulkCopy.DestinationTableName = tableName
            bulkCopy.WriteToServer(reader)
            bulkCopy.Close()
        End Using
    Catch ex As SqlException
        'throw exception back to calling sub
        Throw ex
    End Try
End Sub
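A hedged thought on the copy error: SQL Express keeps the MDF open for as long as the database is attached, so File.Copy fails even after every connection is closed. One sketch, assuming the database is named EMField (the file path below is a placeholder), is to detach it around the copy:

-- Release the file lock before the copy...
EXEC sp_detach_db @dbname = N'EMField';
-- ...File.Copy runs here, outside SQL Server...
-- ...then re-attach so the service can keep using the database.
CREATE DATABASE EMField
ON (FILENAME = N'C:\data\EMField.mdf')
FOR ATTACH;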
Hello, how do I import data from a text file into columns (a DB table) that I have created in SQL Server 2005? The data in the text file does not have any commas, semicolons, or spaces between values. Each record in the text file consists of 109 characters (real numbers). I also know that the first 4 characters go in a particular field, the next 4 go in another field, the next 6 in another, and so on. I have created the columns in a table accordingly.
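One possible sketch, with hypothetical table, column, and path names: load each 109-character line into a single-column staging table with BULK INSERT, then carve the fixed positions out with SUBSTRING.

CREATE TABLE dbo.RawLines (line char(109) NOT NULL);

BULK INSERT dbo.RawLines
FROM 'C:\imports\data.txt'
WITH (ROWTERMINATOR = '\n');

-- first 4 characters, next 4, next 6, and so on
INSERT INTO dbo.Target (field1, field2, field3)
SELECT SUBSTRING(line, 1, 4),
       SUBSTRING(line, 5, 4),
       SUBSTRING(line, 9, 6)
FROM dbo.RawLines;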
Hello, I have a DTS package that copies data from a fixed width file and inserts it into a SQL Server 2000 table. These files are generated daily, and usually contain about 200k records.
Sometimes, the format of the file can differ slightly between two days. I can't tell if the problem lies with the length of the record string or the string terminator. The record string is supposed to begin with 'D1'. One file (as viewed in Notepad) will contain the records each on its own row, and each row will start with 'D1'. In another file, the first record starts with 'D1', but the next record does not begin on a new line. Instead, the next record continues at the end of the previous record. In this case, 'D1' is preceded by an unrecognized character, as if to indicate a carriage return.
Anyway, in SSIS, when I configure the row width of the fixed-width file in the flat file connection manager as 386, and all the records begin on their own row, the data processes without a problem. When the records do not all begin on their own row, the data will process fine if I change the row width to 385.
I don't have to compensate for this in DTS. In both SSIS and DTS, the flat file connections are configured with a row delimiter of LF and no text qualifier. Why do SSIS and DTS process these files differently?
I cannot seem to get my flat file to capture the columns in error when inserting into a SQL table. I have tried a few examples from MS and did not get anything written to my flat file output. I have set up the Source Error Output on this flat file; it uses a script task to create the error description and then writes it to a Flat File Destination.
I am new to SSIS and have not had any formal training on it. However, I am very familiar with VS.NET/C# and SQL 2000 DTS. I need to get this working ASAP, as there are 45 flat files in total that need to be processed. Once I have this solved for one, the rest will follow suit.
I am new to the forum, so I am not sure if I am posting my problem under the right topic.
We have a SQL Server 2005 Enterprise Edition four-node cluster on Windows 2003 Advanced Server.
I am log shipping these databases to a different server at a different location. My log shipping went fine until one of the cluster nodes failed and the server instance failed over to another node. The backup that happened around that time got copied over to the secondary by the copy job. The secondary server tried restoring the log file that was copied over, and I think it failed in the middle of restoring it. (You would think SQL would know if the backup is incomplete and would move on to the next file; not sure what happened there.) There is no sign of the *.TUF file in the directory where I keep the log files.
I tried restoring it manually and I got the following error:
Msg 4319, Level 16, State 3, Line 1
A previous restore operation was interrupted and did not complete processing on file 'sessionlog1'. Either restore the backup set that was interrupted or restart the restore sequence.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
I looked in msdb..log_shipping_secondary_databases for the last file that was restored and tried restoring it again with the following RESTORE command, removing and adding some of the keywords that you see after the WITH clause. Microsoft does not recommend using CONTINUE_AFTER_ERROR unless it's absolutely necessary. I still get the above error.
restore log sessiondata
from disk = 'I:sql13qasmlogssessiondatasessiondata_20070901124516.trn'
with restart, CONTINUE_AFTER_ERROR, norecovery
When I add RESTART in the WITH clause, I get:
The restart-checkpoint file 'J:\Microsoft SQL Server\MSSQL.5\MSSQL\Backup\sessiondata.CKP' was not found. The RESTORE command will continue from the beginning as if RESTART had not been specified.
Msg 4319, Level 16, State 1, Line 1
A previous restore operation was interrupted and did not complete processing on file 'sessionlog1'. Either restore the backup set that was interrupted or restart the restore sequence.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
I checked the backup directory and I can't locate the .CKP file.
Has anyone ever come across this issue?
Is there any other way I could recover this DB in standby or norecovery mode?
Any kind of help to resolve this issue (besides copying over a full backup and redoing the whole log-shipping process) would be appreciated. Since my primary and secondary servers are in totally different locations, I need to ship a tape whenever I need a full backup. This is the third time it has happened on that cluster; it's frustrating to ship a tape every time this happens.
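Not a definitive fix, but one avenue that may avoid shipping a full backup, assuming the secondary database is still in NORECOVERY/STANDBY from its original full restore: a differential backup is usually far smaller than a full and can bridge a broken log chain, after which the later .trn files restore as usual. Paths below are placeholders.

-- On the primary:
BACKUP DATABASE sessiondata
TO DISK = 'I:\backups\sessiondata_diff.bak'
WITH DIFFERENTIAL;

-- On the secondary (database must still be in a restoring state):
RESTORE DATABASE sessiondata
FROM DISK = 'I:\backups\sessiondata_diff.bak'
WITH NORECOVERY;
-- then resume restoring the subsequent .trn files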
I have one task in which I have to collect lots of .txt files having a ## delimiter. My requirement is to convert the delimiter from ## to a comma and save the new file with a .dat extension in a different folder.
I have set up all the required steps and run the application. It should flow like this: collect the source .txt file, do the Script Component processing, and create a new .dat file with the processed data in the Data Flow task. But in my task, the source and destination start at the same time and the processing starts afterwards, which causes empty files, or sometimes a.txt's data is stored in b.dat while a.dat is completely empty.
The process should flow in sequence, but the behavior is totally against that. I am using a Foreach Loop Container to pick up each file.
Coming from an Oracle background: how do you carry out these admin tasks in a SQL 2000 environment?
1. Check instance availability
2. Check listener availability
3. Check alert log files for error messages
4. Clean up old log files before the log destination gets filled
5. Analyze tables and indexes for better performance
6. Check tablespace usage
7. Find invalid objects
8. Monitor users and transactions
Thanks for the help.
Hi: I am having a problem with the syntax. I am trying to get the daily value for a contract_nbr. The selection below shows that I am selecting, via a CASE statement, when the day = 2 (2nd day) and the month is a parameter provided by the user. For this particular example, I am using 2 (February). Thanks for the help!
[code]
Set @Req_Month = '2'

SELECT Distinct a.contract_nbr,
    Case when Day(c.Beg_eff_date) = 2 and month(c.Beg_eff_date) = @Req_Month
         Then c.rcpt_nom_vol - c.rcpt_fuel - rcpt_act_vol
         Else 0 End As Day_2
from TIES_Gathering.dbo.contract a
Inner Join TIES_Gathering.dbo.NOm b
    on a.contract_nbr = b.contract_nbr
Inner Join TIES_Gathering.dbo.Nom_vol_detail c
    on c.Nom_id = b.Nom_id
where (a.contract_sub_type = 'INT')
  and (a.Contract_type_code = 'GTH')
  and (DATEADD(mm, DATEDIFF(mm, 0, getdate()), 0) < a.current_expirtn_date)
  and (c.rcpt_dlvry_ind = 'R')
Group by a.contract_nbr
[/code]
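A hedged guess at the syntax problem: the CASE expression in the SELECT list is not in the GROUP BY, so SQL Server will reject it. One sketch, assuming the intent is one row per contract with the day-2 volume totalled, is to wrap the CASE in an aggregate:

[code]
SELECT a.contract_nbr,
       SUM(CASE WHEN DAY(c.Beg_eff_date) = 2
                 AND MONTH(c.Beg_eff_date) = @Req_Month
            THEN c.rcpt_nom_vol - c.rcpt_fuel - c.rcpt_act_vol
            ELSE 0 END) AS Day_2
FROM ...   -- same joins and WHERE clause as above
GROUP BY a.contract_nbr
[/code]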
In practice, I find encryption kind of messy to use: opening and closing keys, the use of certificates, all the while trying not to give away the password in cleartext. If our .NET programmers are to use it efficiently in high-level code, we need a function call, say EncryptValue(<input value>), that simply returns the encrypted value. Is it possible to write such a function? (And of course, we need the mate, DecryptValue(<encrypted value>).)
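A minimal sketch of one way this can work, assuming passphrase-based encryption is acceptable: ENCRYPTBYPASSPHRASE and DECRYPTBYPASSPHRASE need no key open/close ceremony, so they can live inside scalar functions. Hard-coding the passphrase as below just moves the cleartext problem into the function body, so in practice it should be fetched from somewhere protected.

CREATE FUNCTION dbo.EncryptValue (@value nvarchar(3000))
RETURNS varbinary(8000)
AS
BEGIN
    -- placeholder passphrase; do not ship it hard-coded
    RETURN ENCRYPTBYPASSPHRASE('placeholder-passphrase', @value);
END
GO

CREATE FUNCTION dbo.DecryptValue (@cipher varbinary(8000))
RETURNS nvarchar(3000)
AS
BEGIN
    RETURN CONVERT(nvarchar(3000), DECRYPTBYPASSPHRASE('placeholder-passphrase', @cipher));
END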
I have a database with multiple items each day. I'm looking to extract monthly information from this data, so I'm thinking I would like a drop-down list with each available month. How would I extract which months (across multiple years) have data from a smalldatetime column? I'm using C#. Thanks!
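A small sketch of the SQL side, with a hypothetical table and column name; it returns one row per year/month that actually has data, ready to bind to a drop-down:

SELECT DISTINCT YEAR(item_date) AS yr,
                MONTH(item_date) AS mo
FROM dbo.Items
ORDER BY yr, mo;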
I have to import data into two tables on a daily basis. The data is provided as a flat file. In order to fulfill this task, the tables have to be truncated first. Is there any way to do this job automatically with DTS, or do I have to write an interface in VB or something like that?
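DTS can do this without VB: an Execute SQL task that runs the truncates, followed by the data pump (or a BULK INSERT), covers it. A sketch of the T-SQL, assuming a comma-delimited file and placeholder names:

TRUNCATE TABLE dbo.ImportTable1;

BULK INSERT dbo.ImportTable1
FROM '\\server\share\daily.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');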
I am not a SQL Server expert; I normally work with a few other databases. We are running SQL Server 2008 R2. I need to determine, by day, how much log space is being used. This is needed so I can size the file system appropriately to handle an outage of our backup system. The goal is to have the log file system large enough to handle 24 hours' worth of logs. I have found statements for current log usage, but not one for the daily total of log generated.
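One hedged approach, assuming log backups are taken regularly: their recorded sizes in msdb are a reasonable proxy for the log generated per day. The database name below is a placeholder:

SELECT CONVERT(date, backup_start_date) AS backup_day,
       SUM(backup_size) / 1048576.0 AS log_backup_mb
FROM msdb.dbo.backupset
WHERE type = 'L'            -- log backups only
  AND database_name = 'MyDb'
GROUP BY CONVERT(date, backup_start_date)
ORDER BY backup_day;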
Ladies, gentlemen, I have a table that grows by anywhere from 200,000 to 1,000,000 records per day. Besides that, I need to keep at least 6 months of historical data in this same table. The transaction log was purged after each batch when testing data monthly. I'm looking for some way of deleting just one day's data if it meets a criterion; the rest must remain within the 6-month window of historical data. This is what I've come up with so far:
select * FROM dbo.Temp_table WHERE datediff(day, DATE_TIME, getdate()) >= 180
If it meets this criterion, can I just change the SELECT to a DELETE? Please let me know what you think.
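A sketch of the DELETE form, with one tweak: putting the arithmetic on GETDATE() instead of the column keeps the predicate able to use an index on DATE_TIME, which matters at this table's volume. Names are as in the post:

DELETE FROM dbo.Temp_table
WHERE DATE_TIME < DATEADD(day, -180, GETDATE());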
I am trying to invoice all of the records in my 'Orders' table. After each record has been invoiced, I would like SQL to flag that record as completed so that when I run my query the next day it will ignore those that have already been completed. Any feedback would be greatly appreciated.
This is the query I wrote to invoice one order at a time by specifying each Order_Num separately. As you can tell, I'm a n00b. Thanks, all.
select convert(varchar, getdate(), 107) as Date

select order_num as 'Invoice No.'
from orders
where order_num = '20009'

select c.cust_name as Customer, c.cust_address as 'Street Address', c.cust_city as City,
       c.cust_state as State, c.cust_zip as 'Zip Code'
from customers as c, orders as o
where c.cust_id = o.cust_id
  and order_num = '20009'

select oi.order_item as 'Line Item', oi.quantity as QTY, p.prod_name as 'Product Name',
       oi.item_price as 'Sale Price', oi.quantity*oi.item_price as Total
from orderitems as oi, products as p
where oi.prod_id = p.prod_id
  and order_num = '20009'
order by oi.order_item

select sum(quantity*item_price) as Subtotal,
       sum(quantity*item_price*.089) as 'Tax 8.9%',
       sum(quantity*item_price) + sum(quantity*item_price*.089) as Total
from orderitems
where order_num = '20009'
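A minimal sketch of the flagging idea, assuming a new column may be added to the orders table:

-- one-time change: add a completion flag
ALTER TABLE orders ADD invoiced bit NOT NULL DEFAULT 0;

-- after an order is successfully invoiced:
UPDATE orders SET invoiced = 1 WHERE order_num = '20009';

-- the daily run then only picks up unprocessed orders:
SELECT order_num FROM orders WHERE invoiced = 0;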
What daily/weekly checks do you guys currently perform on your servers and databases?
I recently ran across an article on SQLServerCentral that listed a couple of daily checks that I'm thinking about implementing in my environment, among them:
DB Missing Recent Backup - Report
DB Missing Recent Log Backup - Report
Drives Low on Disk Space - Report
Error Log Messages Report - Report
Instance Recently Restarted - Report
Job Failures - Report
Large Databases Log File - Report
I already have in place:
Verify the SQL Agent service is running
Check the disk space available
Since I'm going to spend some time on this, I was wondering if there's anything else that you guys have in place, or any other 'nice to have' items you might suggest, so I don't leave anything behind...
Hello all,
I have the following requirement (SQL 2000, SP2):
1. Need to restore production database (A) to another database (B) on another server (no direct connection).
2. In the restored backup (B), SP code should not be visible (even to sa), or all SPs can be deleted.
3. This needs to be carried out daily.
4. New tables may be added frequently to A.
5. This job should be scheduled.
6. I can overwrite the entire database (B).
7. I cannot encrypt SPs in the original database (A).
I thought of snapshot replication first, but when new tables are added, I need to add the new tables manually to the publication. Can someone suggest the most reliable and implementable method?
Thanks
Srinivas
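Since requirement 2 allows deleting all SPs, one hedged sketch is a nightly scheduled job on the second server that restores B over the top and then drops every user stored procedure:

-- run on B after each restore; drops all non-system stored procedures
DECLARE @name sysname
DECLARE sp_cursor CURSOR FOR
    SELECT name FROM sysobjects
    WHERE type = 'P' AND OBJECTPROPERTY(id, 'IsMSShipped') = 0
OPEN sp_cursor
FETCH NEXT FROM sp_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC('DROP PROCEDURE [' + @name + ']')
    FETCH NEXT FROM sp_cursor INTO @name
END
CLOSE sp_cursor
DEALLOCATE sp_cursor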
I'm just a newbie with XML. We're moving all our DTS packages to SSIS. Most of them are simply processes that take a plain file and load the data into a SQL table, or the reverse. So I wonder what role XML will play here.
I mean, are there big differences between storing data in XML format and as nvarchar?
I need to create a daily checklist for SQL 2000. Here is what I have done:
1. Check the connectivity of each server over the network (ping / open Enterprise Manager and prove the connection).
2. Check whether the services are running for each server (green light).
3. Check that the scheduled tasks on the production servers are running normally, via Enterprise Manager on each server or your email (set up SQL Mail to notify you).
4. Check the hard disk space available on the SQL Servers.
5. Check all the database and transaction log space on each server. If the database or transaction log space runs out, transactions will fail.
6. Check NT event logs for any error messages.
7. Check SQL error logs for any errors occurring within SQL Server.
My question is: any other tips and ideas for the daily routine?
I have a client that is using the free MSDE database engine. There is no graphical interface that I know of to manage the day-to-day tasks. I need to do a daily backup to a disk file, starting at 7:40 pm. My client will then copy that disk file to tape during his system-wide backup at midnight. I set up a batch file and executed it in osql, but the job is not working: no disk file has been created in the last 3 weeks. I don't have a clue what I am missing. Can someone please look at this and tell me how to fix it?
Thanks in Advance!
Debbie Erickson

USE master
EXEC msdb..sp_delete_jobserver @job_name='PetDB Backup', @server_name='Server06'
GO
USE master
EXEC msdb..sp_delete_jobschedule @job_name='PetDB Backup', @name='ScheduledBackup'
GO
USE master
EXEC msdb..sp_delete_jobstep @job_name='PetDB Backup', @step_id=0
GO
USE master
EXEC msdb..sp_delete_job @job_name='PetDB Backup'
GO
USE master
EXEC msdb..sp_add_job @job_name='PetDB Backup', @enabled=1, @description='Backup Petdb'
GO
USE master
EXEC msdb..sp_add_jobstep @job_name='PetDB Backup', @step_name='Nightly petdb maint',
    @subsystem='TSQL',
    @command='BACKUP DATABASE Petdb TO DISK = ''E:PetlicData BackupPetdb.bak''',
    @database_name='petdb'
GO
USE master
EXEC msdb..sp_add_jobschedule @job_name='PetDB Backup', @name='ScheduledBackup',
    @freq_type=4, @freq_interval=1, @active_start_time='194000'
GO
USE master
EXEC msdb..sp_add_jobserver @job_name='PetDB Backup', @server_name='Server06'
GO
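One hedged guess at why nothing runs: the script builds the job, but MSDE does not start the SQL Server Agent service by default, and a job never fires while the Agent is stopped. Worth checking before anything else:

-- start the Agent service if it is stopped (assumes xp_cmdshell is available;
-- the same command can be run directly at a command prompt on the server)
EXEC master..xp_cmdshell 'net start SQLSERVERAGENT'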
I have a database of details of users that visit my site. Among other things, it saves the IP address and the date/time of the visit. I would like to be able to determine the unique visitors (by IP address) I have on any given day. The problem, though, is that the data is saved as a date and a time, so when I try to determine the unique visitors, since they all have a different time, MSSQL thinks they are all unique. Any ideas how I can get around this without actually modifying the data in the underlying table?
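A sketch, with hypothetical table and column names: grouping on just the date portion of the datetime lets COUNT(DISTINCT ...) count each IP address once per day, without touching the stored data. Converting to a yyyymmdd string works even on versions without a separate date type:

SELECT CONVERT(varchar(10), visit_time, 112) AS visit_day,
       COUNT(DISTINCT ipaddress) AS unique_visitors
FROM dbo.SiteVisits
GROUP BY CONVERT(varchar(10), visit_time, 112)
ORDER BY visit_day;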
My goal is to recreate a table daily so that the data is updated. This could be a bad decision performance-wise, but I felt this was simpler than running a daily UPDATE statement. I created a stored procedure:

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
CREATE PROCEDURE sp_CreatetblImprintPhrase
AS
DROP TABLE tblImprintPhrase
GO
CREATE TABLE tblImprintPhrase (
    CustID char(12),
    CustName varchar(40),
    TranNoRel char(15)
)
GO

However, I was looking to edit the stored procedure, changing CREATE to ALTER, but when I do so, I am prompted with: Error 170: Line 2: Incorrect syntax near "(". If I change back to CREATE, the error goes away, but the sproc cannot be run because it already exists. Any thoughts?
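A hedged diagnosis: GO is a batch separator, not a T-SQL statement, so the first GO after CREATE PROCEDURE ends the procedure body early. The stored procedure contains only the DROP TABLE line, and the later ALTER trips over the loose CREATE TABLE batch. A sketch of the same procedure without the separators:

CREATE PROCEDURE sp_CreatetblImprintPhrase
AS
BEGIN
    -- drop the old copy only if it exists
    IF OBJECT_ID('dbo.tblImprintPhrase') IS NOT NULL
        DROP TABLE dbo.tblImprintPhrase;

    CREATE TABLE dbo.tblImprintPhrase (
        CustID char(12),
        CustName varchar(40),
        TranNoRel char(15)
    );
END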
I am implementing an application that looks into another system's database (DBF). I need to query out new orders that have been put in daily. What is the best way to handle this situation? Should I write a service, or have a trigger set up in MSDE? I just need a simple SELECT statement that grabs orders between a date range and updates my SQL database (MSDE). I am not looking for specific code examples, just a solution.
Hi, my company needs to import three Access or Excel customer-order tables into an MS SQL database daily: orders (basic customer info), items (product info and quantity), and options (options of quantity). The problem is that the three tables have to get additional columns when they are imported into SQL. For example, when an order comes in through the intranet application, it will keep track of whether it is in stock or out of stock by checking the product table. The two databases (MS SQL and Access/Excel) have identical columns, but the SQL one has more in addition. This is my first time working on this kind of case; if somebody could give me an abstract set of steps or suggestions to accomplish this, I'll be very happy ;)
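One hedged sketch of the per-table step, with placeholder paths and names: pull the Excel sheet through OPENROWSET and supply the extra SQL-only columns in the SELECT list, since the source has no values for them.

INSERT INTO dbo.orders (cust_id, order_date, in_stock)
SELECT cust_id, order_date,
       0  -- extra SQL-only column; filled in later by the stock check
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\imports\orders.xls',
'SELECT * FROM [Sheet1$]');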