I have a database that is being used as sort of a reports data
warehouse. I use DTS packages to upload data from all the different
sources. Right now I have it truncating the tables and appending with
fresh data. I was considering using updates instead, and my question is: which is more efficient?
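For what it's worth, a minimal sketch of the two approaches, using a hypothetical reporting table rpt_data and staging source src_data (this is SQL 2000-era DTS, so there is no MERGE, and the update path needs a join):

-- full refresh: TRUNCATE is minimally logged, so this is usually the cheaper option
TRUNCATE TABLE rpt_data
INSERT INTO rpt_data (id, amount)
SELECT id, amount FROM src_data

-- update in place: every changed row is fully logged, and new rows still need a separate INSERT
UPDATE r
SET r.amount = s.amount
FROM rpt_data r
INNER JOIN src_data s ON s.id = r.id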
I need to copy data from one SQL table to another SQL table. Is it possible to use DTS to append and update data from one table to another, along the lines of using a Microsoft Access append or update query?
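It is possible; in T-SQL the Access-style append and update queries map onto an UPDATE over a join followed by an INSERT of the rows that don't exist yet. A sketch with hypothetical table and column names:

-- update rows that already exist in the destination
UPDATE d
SET d.col1 = s.col1
FROM dest d
INNER JOIN source s ON s.id = d.id

-- append rows that don't
INSERT INTO dest (id, col1)
SELECT s.id, s.col1
FROM source s
WHERE NOT EXISTS (SELECT 1 FROM dest d WHERE d.id = s.id)

A DTS Execute SQL task can run both statements as a single batch.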
In SSIS, I have an Execute SQL Task that runs a direct SQL statement as follows to update an MS/SQL database table:
Update auditTable SET PackageName = @pkgname
In the parameters mapping, @pkgname is mapped to the System variable System::PackageName (as input).
This task runs fine without any error if I use ADO.NET as the data source.
But if I use OLE DB as the source, it fails and demands that the @pkgname variable be defined (as if it were a user variable, when in fact it was intended only as a parameter for the update statement).
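This is expected behavior rather than a bug: with an OLE DB connection, the Execute SQL Task doesn't accept named parameters, so @pkgname is parsed as an undeclared variable. The OLE DB flavor of the same task uses ? placeholders with 0-based ordinal parameter names:

Update auditTable SET PackageName = ?

with System::PackageName mapped in Parameter Mapping as an Input parameter named 0. ADO.NET connections are the ones that understand the @pkgname form.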
I need to write a single query that will append the values from one table into another table but I need to update a single field in the table with a static value. What I'm working with is an Access .adp with an SQL 2000 backend. The database is used to track ticket sales, payments, and contact info for season ticket holders. Prior to upsizing, each year the database would be copied and then choice bits modified for the next year.
In other words, the history was in several databases, i.e. db2001, db2002, db2003, and for the current year dbcurrent. I decided to create historic tables in the current DB rather than have umpteen DBs on my SQL server.
My problem is that I need to create a query that appends the previous year's data to my table and then updates the season field to reflect the season that the data came from.
In short, say I have a table named accounts with fields account, customer, addr1, addr2, ..., ticket type, and I want it to end up in account_hist with fields account, customer, addr1, addr2, ..., ticket type, SEASON, and make the season equal, say, 8, which represents the 8th season the team has played. I can get both queries to work separately :D , but for end-user ease, I want to perform both actions at the same time :confused: . Can anybody point me in the right direction? Thanks
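A single INSERT ... SELECT can do both at once, since the static season value is just a literal in the select list. Abbreviating the column list from the description above:

INSERT INTO account_hist (account, customer, addr1, addr2, ticket_type, season)
SELECT account, customer, addr1, addr2, ticket_type, 8
FROM accounts

For end-user ease, the 8 could be a parameter of a stored procedure, so next year's archive run only needs a different season number.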
I'm new to SQL server. I want to add or append a unique set of rows to a destination table from a source table, they are essentially the same table by definition. The source table is updated every hour via DTS, all rows deleted and new set added. Both tables have the same primary key. Approximately 40 unique rows are created each hour and I would think the best approach would be to append the new rows to the destination table. I think an Append query will run into a primary key conflict.
In Access, I did this within VB by checking the max value of the primary key and then running the append for any values greater than that.
In SQL, I'm not sure if this should be done as a stored procedure or if there is an easier approach altogether.
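A stored procedure is a natural fit, since the hourly DTS package can call it as a final step. The max-key approach from Access translates directly; table and column names below are hypothetical:

INSERT INTO dest_table (row_id, col1)
SELECT s.row_id, s.col1
FROM source_table s
WHERE s.row_id > (SELECT MAX(row_id) FROM dest_table)

If the key isn't strictly increasing, a NOT EXISTS test against the destination's primary key avoids the conflict instead.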
I have a series of .csv files created by a parts system. The .csv filename is in the format partnumber.csv. The csv file contains a date column and 6 other fields. Each csv file is about 4000 records, and there are approximately 7000 .csv files that get re-generated once a week.
I'm using the C# SqlBulkCopy object to import the csv file into a temp table. No problem there. It works really fast.
What I need now is a way to move the data from the temp table to the final table and append the partnumber. I'm thinking it would be easy to pass the partnumber in as a parameter to a T-SQL query, but I'm not sure how to write the query itself. I also want to check if the partnumber/datestamp combination from the temp table already exists in the final table and skip it if it does. So I suppose I ultimately need an update query.
Once I have that query written it's easy to import a .csv, launch the update query, wipe the temp table and repeat with the next .csv file.
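This is really an insert that skips duplicates rather than an update. A sketch, assuming the temp table is #parts_staging, the final table is parts_history, and the six extra fields are abbreviated; @partnumber is passed in from C# as a query parameter:

INSERT INTO parts_history (partnumber, datestamp, field1, field2)
SELECT @partnumber, t.datestamp, t.field1, t.field2
FROM #parts_staging t
WHERE NOT EXISTS (SELECT 1
                  FROM parts_history p
                  WHERE p.partnumber = @partnumber
                    AND p.datestamp = t.datestamp)

Note the temp table has to be created and queried on the same connection used for SqlBulkCopy, or it won't be visible.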
I'm sorry to be ignorant on this point. It seems trivial, but what's the difference between @@ and @ when using variables in T-SQL? I have a developer that always uses @@ for local variables and @ for reference variables (meaning variables declared as parameters for a stored procedure or function).
Is that purely stylistic? Is it a holdover from some previous version? Or is it a legitimate best practice that I've not seen before?
My google-shui is weak today; I found nothing when searching.
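As far as I can tell it is purely stylistic, and an unusual style at that: the parser does accept @@-prefixed local names, but by convention the @@ prefix is reserved for built-in system functions, so using it on locals mostly invites confusion:

DECLARE @rows int                 -- conventional local variable
SELECT @rows = COUNT(*) FROM sysobjects
PRINT @@VERSION                   -- @@ normally appears only on built-ins like @@VERSION, @@ERROR, @@ROWCOUNT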
Both these tables contain a considerable number of rows, but over time tableA ends up containing orphaned values (i.e. a_id values that are not used in tableB), and this problem cannot be rectified by setting, for example, cascade deletes.
To fix this problem I decided to write a simple stored procedure to purge all values in tableA where the a_id is not used in tableB:
DELETE FROM tableA WHERE a_id NOT IN (SELECT a_id FROM tableB)
Now, although the following document relates to Postgres:
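One caveat worth noting with the NOT IN form: if tableB.a_id is nullable and contains even one NULL, the NOT IN predicate evaluates to unknown for every row and the DELETE removes nothing. A NOT EXISTS version sidesteps that (and often optimizes at least as well):

DELETE FROM tableA
WHERE NOT EXISTS (SELECT 1 FROM tableB WHERE tableB.a_id = tableA.a_id)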
I wonder if anyone else out there has the same impression that I have: I find that DTS works much better than SSIS.
I find that DTS is so easy to use and reliable: it gets the job done, and fast! On the other hand, SSIS seems so needlessly complex that it takes hours of troubleshooting just to get it to work, and sometimes it doesn't work at all. For example, I have just spent hours trying to get SSIS to import a flat file with 300,000 rows. It just crashes, without even giving an error message one could use to fix it. I have now accomplished the same task with DTS and it took me 5 minutes!
I honestly don't see a valid reason for using SQL Server 2005 instead of 2000. So far it's much more productive to use 2000.
I want to create a stored procedure that will take filtered entries from one table and insert them into another table. I have created stored procedures using variables, but what is the best way of taking data from one table to another?
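The usual shape is an INSERT ... SELECT wrapped in the procedure, with the filter values as parameters; table and column names here are hypothetical:

CREATE PROCEDURE dbo.CopyFilteredEntries
    @status int
AS
    INSERT INTO dbo.dest_table (id, name)
    SELECT id, name
    FROM dbo.source_table
    WHERE status = @status
GO

This keeps the whole transfer set-based on the server, which is generally much faster than fetching rows into variables and inserting them one at a time.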
I have a column in a table which already contains data; now I want to append some more data to it. How do I do it so that the earlier content does not get deleted?
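Assuming the column is a character type, an UPDATE that concatenates the old value with the new text preserves the existing content; names here are hypothetical:

UPDATE dbo.myTable
SET description = ISNULL(description, '') + ' extra text to append'
WHERE id = 42

The ISNULL guards against rows where the column is NULL, since NULL + 'anything' yields NULL.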
Hi, I'd like to create a table on our SQL server that I can append records to when running a query.
What I intend to do is create a new table with the same fields as the query; then, when the query runs, I want it to append the results to this new table. Is this possible? How would I go about creating the table, primary key, etc.?
I've tried doing this through an Access front end, but it's not efficient and was a bit of a struggle to be honest.
Also, we need other people who may not have access to Enterprise Manager to be able to run this append query. Danny
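One way to set this up, with hypothetical names: create the table once from the query's own column list, add a key, and have each run append. Wrapping the append in a stored procedure and granting EXECUTE on it lets people run it without Enterprise Manager:

-- one-time setup: WHERE 1 = 0 copies the structure but no rows
SELECT col1, col2 INTO dbo.query_results FROM dbo.source_table WHERE 1 = 0
ALTER TABLE dbo.query_results ADD result_id int IDENTITY(1,1) PRIMARY KEY

-- each run appends the current results
INSERT INTO dbo.query_results (col1, col2)
SELECT col1, col2 FROM dbo.source_table WHERE col2 > 0   -- your query's criteria here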
Hi All, any suggestions / views / help on the question below would be welcomed. I am building an ASP.NET 2.0 application with SQL 2005 Express as the back end. My back end has 3 major tables:
tblArticles - saves basic info on articles posted by users (like articleid, title, short desc, rating, views, etc)
tblCategories - saves various categories and their hierarchies (id, parentid, name, etc)
tblArticleCategories - saves info on which articles fall in which categories (like articleid, categoryid)
As of now, I am caching all rows from the first 2 tables, but I am in a bit of doubt about caching the third table (tblArticleCategories). Data in this table won't change very often, and the table has just 2 columns and not many rows, which makes it a good target for caching. The reason I am in doubt is that when a website visitor clicks on any category link in the category tree view, I need an inner join across all 3 tables to locate and return all articles found in that particular category. But I can do the same thing without hitting the database: I already have 2 of the required 3 tables in my cache, so I can simply add the third table to my cache and then, using the DataView object's RowFilter property on these 3 cached tables, get the appropriate results.
Which of the 2 methods would you prefer and suggest? I mean, do you feel that just to save hits against the database I am going too far and doing a lot of work in the DataView (which might not be as efficient as the SQL engine), or do you feel that the inefficiency of the DataView still wins compared to the cost of hitting the database for this? Thanks in advance, bye, take care. Raj Chaudhari, Mumbai, India (MCAD.NET) www.xtremebiz.biz
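For reference, the join in question is small; with the column names assumed from the description above, it would be roughly:

SELECT a.articleid, a.title
FROM tblArticles a
INNER JOIN tblArticleCategories ac ON ac.articleid = a.articleid
WHERE ac.categoryid = @categoryid

(tblCategories only needs to join in if category data beyond the id is required.)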
Hello all, I have a table 'statistics' which holds information about another table, i.e. the number of rows belonging to each user. Would I be better off using a trigger after each insert to increment a certain row, or would I be better off selecting the data with a SQL statement and updating the column whenever the statistics page is requested? Does SQL provide any methods which allow a column to count other rows or columns?
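SQL can always compute the count on demand with an aggregate; the trigger is only worthwhile if that query runs far more often than rows are inserted. A sketch of both, with hypothetical names:

-- on demand: no statistics table needed
SELECT user_id, COUNT(*) AS row_count
FROM dbo.user_rows
GROUP BY user_id

-- maintained: an AFTER INSERT trigger keeps the counter current
CREATE TRIGGER trg_user_rows_ins ON dbo.user_rows
AFTER INSERT
AS
    UPDATE s
    SET s.row_count = s.row_count + i.cnt
    FROM dbo.user_stats s
    INNER JOIN (SELECT user_id, COUNT(*) AS cnt
                FROM inserted
                GROUP BY user_id) i ON i.user_id = s.user_id
GO

A matching AFTER DELETE trigger would be needed to decrement, which is part of why the on-demand aggregate is often the simpler choice.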
Has anyone here ever used the Informix database and can give me some differences between Informix and SQL Server?
One of our users is thinking about purchasing a COTS product that only supports an Informix database. I need to convince the user to evaluate other rival applications that can support SQL Server, and need some arguments in favor of not going with Informix.
We currently use CA ArcServe (ArcServe 6.5 Enterprise and Single Server Editions) to back up our Windows NT files and MS-SQL Server databases. We have experienced significant reliability issues with ArcServe. Many times we have found ourselves rebuilding a corrupt ArcServe Job (ArcServe’s backup schedule) database. One of our NT servers occasionally bug checks when ArcServe is performing backups. Occasionally ArcServe Jobs incorrectly reschedule themselves. Sometimes the Jobs do not complete but stay executing, not performing any work, and cancelling them can require a lot of effort. The ArcServe job DB repair utility generally does not work. The user interface is lacking; for example, the job scheduling options are very limited. CA tech support for this product has been poor. Because we have issues with ArcServe stability we are now evaluating Veritas (formerly Seagate) Backup Exec for NT. What are other people’s experiences with these 2 products?
I've got a network tech that I work with from time to time. He's going to migrate an Access database over to SQL Server. He says it should be easy: it's a flat file, he can just do it through Enterprise Manager. I warned him that datatypes can become an issue (you kinda have to know your db); he looked at me like I'm an idiot and proceeded to migrate the tables over to SQL Server... Needless to say, he got a lot of error messages and is now totally confused. Now let me ask some experts who really know databases: do you ever have problems with network techs who think they know it all?
We are planning hardware purchases (more is better). One of our databases is 131 gigs in size and has 45 gigs of 'space available'. I'm not a very experienced SQL Server person, but this seems like quite a bit of 'space available'.
1) Is there a way to regulate the amount of 'space available'? 2) Are there any rules of thumb for how much 'space available' there should be?
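On (1), you can reclaim unused space with a shrink, though it's generally discouraged because the file will just grow again and fragment in the process; these are standard commands:

EXEC sp_spaceused                        -- shows allocated vs. unallocated space
DBCC SHRINKFILE (my_data_file, 100000)   -- hypothetical logical file name, target size in MB

On (2), a common rule of thumb is to keep enough free space for the largest maintenance operation you run (e.g. rebuilding your biggest index), plus expected growth, so autogrow events stay rare.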
We are about to purchase new database servers and have been offered a good deal on 64-bit Xeon machines. At present we run SQL 2000 on Windows Server 2003, both of which are 32-bit versions.
Is there any problem using our current 32-bit server software on the 64-bit machines (apart from not being able to utilise their full power)? I'm assuming the SQL 2005 licenses are the same price regardless of 32-bit or 64-bit version. If we buy a 64-bit SQL Server license, are we going to get the best out of it on a 32-bit Windows Server edition?
I have always been told that cursors create a lot of overhead and consume a lot of system resources. Is it faster to store the data in a temp table and loop through it by using Select Top 1 and Delete statements, or by using a static, forward-only cursor? Both ways store the data in tempdb, but doesn't the While loop generate more I/Os than the cursor? In theory, I am thinking that the cursor is better. Any info will be appreciated.
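If a cursor is used, the lightest-weight declaration for this pattern is FAST_FORWARD (forward-only, read-only); a sketch over a hypothetical temp table:

DECLARE @id int
DECLARE c CURSOR FAST_FORWARD FOR
    SELECT id FROM #work
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- process @id here
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c

In my experience this tends to beat the SELECT TOP 1 / DELETE loop, since each DELETE is extra logged work that the cursor never does, but it's worth testing both on your data.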
I have a table with a field defined as nvarchar. I want to change it to varchar. I have a stored procedure which defines the parameter @strCall_desc as nvarchar(4000). Are there going to be any problems with running this sp if I just change the field type as described?
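The change itself is a single statement (names hypothetical):

ALTER TABLE dbo.calls ALTER COLUMN call_desc varchar(4000) NULL

Two things to watch: any Unicode characters that don't exist in the column's code page are silently converted or lost, and since the parameter stays nvarchar(4000), comparisons between it and the varchar column will involve an implicit conversion, which can prevent index seeks.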
I am trying to get a better understanding of when to use RETURN (with a PRINT statement) and when to use RAISERROR.
* Both statements can be used with stored procedures, while only RETURN can be used with functions.
* With RAISERROR it is easy to have multiple errors thrown (if both the calling procedure and the called procedure try to handle the error).
Wow. Thought I could think of more. So that really leaves me with very little info on the proper use of these two statements.
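A small illustration of the split: RETURN hands a status code back to the caller (visible only if the caller captures it), while RAISERROR produces an actual error that @@ERROR and client code can see. Names here are hypothetical:

CREATE PROCEDURE dbo.DoWork
    @value int
AS
    IF @value IS NULL
    BEGIN
        RAISERROR ('Value may not be null.', 16, 1)  -- severity 16 reaches the client as an error
        RETURN 1                                     -- non-zero status code, by convention
    END
    -- do the actual work here
    RETURN 0
GO

DECLARE @rc int
EXEC @rc = dbo.DoWork @value = NULL
PRINT @rc   -- prints 1; the RAISERROR message was also raised

A rough rule: RETURN codes for control flow between your own procedures, RAISERROR for anything the caller must not be able to ignore.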
I have been searching for a way to associate a description with a column name. I have come across multiple posts regarding this question. Problem is that I have seen two different answers. One post mentioned using the undocumented system table named sysproperties, while other posts mentioned using sp_addextendedproperty (and fn_listextendedproperty). Which one, and why one over the other? Thanks.
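For what it's worth, the extended-property procedures are the documented route; sysproperties is the undocumented table underneath them, and undocumented system tables can change or disappear between versions. SQL 2000-style usage, with hypothetical table and column names:

EXEC sp_addextendedproperty 'MS_Description', 'Customer surname',
    'user', 'dbo', 'table', 'Customers', 'column', 'LastName'

SELECT * FROM ::fn_listextendedproperty ('MS_Description',
    'user', 'dbo', 'table', 'Customers', 'column', 'LastName')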
If a database has relationships established between all of the tables via primary and foreign key constraints, why isn't it possible to make a SELECT statement across multiple tables without using a JOIN? If the system knows the relationship schema already, why are JOINs required? Thanks, HC
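A short way to see why: the constraints guarantee integrity, but they don't tell the engine which of possibly several relationships a particular query means to traverse, or whether inner or outer semantics are wanted, so the path still has to be spelled out (hypothetical tables):

SELECT o.order_id, c.customer_name
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id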
This question probably overlaps a few different topic areas.
As I will be required to work with both Oracle and SQL Server, I will be in a difficult position with SSIS (due to its change in distribution).
Therefore I am having to look at alternatives.
With code I can open a text file and parse it reasonably to my satisfaction. However, getting the data into the database is incredibly slow.
I am using an INSERT INTO for each line, which I am sure everyone will shake their head over. This seems to be pretty slow even using transactions.
Is there any scope in using data tables, or having the read on one thread and the write on another?
Other than that, is there an Oracle equivalent of SSIS that comes with the database? (I'd probably get shot for asking that on a Microsoft web site, but would probably get shot if I asked on Oracle forums as well.)
In the past we had reasonable results in outputting to csv and then doing some sort of bulk insert, messy and irritating though this may be.
Any ideas in this area will be gratefully received.
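Since CSV plus a bulk load already gave reasonable results, the set-based route on the SQL Server side is BULK INSERT (Oracle's rough counterpart being SQL*Loader or external tables); the path and table name here are hypothetical:

BULK INSERT dbo.target_table
FROM 'C:\data\extract.csv'
WITH (FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      FIRSTROW = 2,     -- skip a header row
      TABLOCK)          -- table lock helps minimize logging

This usually runs orders of magnitude faster than row-by-row INSERTs, even inside a transaction.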
Can someone tell me in basic terms the difference between a signed and unsigned integer? When would you decide to use one over the other? I'm looking for it more in layman's terms than a technical bit-level discussion.
Can anyone give me some advice on using authentication? What is the best way to go with a database on a server, and why? And in order to use SQL authentication, do you have to change the registry? I have seen some posts that seem to say you can only use it by changing the registry.
So if anyone can give me the pros and cons, I would appreciate it.
My question concerns both desktop and device apps.
I'm using SQL Compact to store some data. I often have to store strings (descriptions, URLs, etc.) but I don't know when to use nvarchar or ntext.
nvarchar needs a size limit, but I often set it to 8092 when I don't know the actual limit (URLs can be very long!). I'm wary of ntext because I suppose there is a performance impact.
Are there any "rules" to help choose which data type I should use?