Working on DTS packages, we used to have two different ways of exporting data to a spreadsheet... We could do it with a single transformation task, or by writing an ActiveX script after running a procedure.
I can already cover the first way while dealing with SSIS... But I wonder whether it is worth researching how I would write VB.NET code to load data into a spreadsheet (considering we are advised to try VB.NET instead of ActiveX while working on these new packages).
What the best way of doing something is always makes for a huge discussion, so my point here is to hear from some of you and decide whether I just keep exporting data through transformation tasks or whether I should try deploying VB.NET code to do it... Which is best for performance, etc.?
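(Worth noting as a third option, offered only as a sketch: plain T-SQL can write to an existing workbook through OPENROWSET, with no script or transformation task at all. The file path, table, and column names below are hypothetical, and it assumes the Jet provider is installed, the sheet already exists with a matching header row, and 'Ad Hoc Distributed Queries' is enabled via sp_configure.)

-- Hypothetical sketch: append rows from a table to an existing Excel sheet.
INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=C:\Exports\orders.xls',
    'SELECT OrderID, Total FROM [Sheet1$]')
SELECT OrderID, Total
FROM dbo.Orders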
Hi, I have migrated a DTS package that had some ActiveX transformation tasks within data pump tasks.
Those parts were migrated as "DTS 2000 tasks"... so ActiveX transformation tasks aren't possible in SSIS? I know ActiveX script tasks are, but what about transformations?
1. If I leave these encapsulated DTS 2000 tasks in the migrated SSIS package, will it run independently of the original DTS package, or does it need the old package around to "call" that part from? (I hope I'm making sense here.) Is it possible to bring this functionality natively into the new SSIS package?
2. If I can't do ActiveX transformation tasks, how could I achieve this in SSIS? Can I achieve it using the script tasks in SSIS?
How would you do logging in a massive row load? I'm having problems because every row error (caused by casting, format, or lookup failures) in a transformation task is redirected to a text file as a log. This is fine when only one error exists per row, but when I have two errors in the same row detected by different transformation tasks, only the first one is reported to the text file. I have to wait for the next load, after I correct the first error, to find the second one. I need to validate as many errors as exist per row in the same load...
Which component or which strategy can I use in an SSIS package to achieve this?
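(One strategy, sketched under stated assumptions rather than prescribed: land the file in a permissive staging table first, then run every validation as a set-based query that appends one row per failed check to an error table, so a single load surfaces all of a row's problems at once. All table and column names here are invented.)

-- Each check scans the whole staging table, so a row failing two checks
-- produces two error records in the same load.
INSERT INTO dbo.LoadErrors (RowID, ErrorText)
SELECT StagingID, 'OrderDate is not a valid date'
FROM dbo.StagingOrders
WHERE ISDATE(OrderDate) = 0

INSERT INTO dbo.LoadErrors (RowID, ErrorText)
SELECT s.StagingID, 'CustomerCode has no match in dbo.Customers'
FROM dbo.StagingOrders s
LEFT JOIN dbo.Customers c ON c.CustomerCode = s.CustomerCode
WHERE c.CustomerCode IS NULL

-- Only rows with no recorded errors move on to the destination table.
INSERT INTO dbo.Orders (OrderDate, CustomerCode)
SELECT CAST(OrderDate AS datetime), CustomerCode
FROM dbo.StagingOrders s
WHERE NOT EXISTS (SELECT * FROM dbo.LoadErrors e WHERE e.RowID = s.StagingID)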
If you have two synchronous transformation components and the input of the second is connected to the output of the first, does the first transformation process (loop through) all rows in the buffer before passing those rows to the second transformation? Or does the first transformation hand each individual row to the second transformation as soon as it has finished processing it?
I'm sorry to be ignorant on this point. It seems trivial, but what's the difference between @@ and @ when using variables in T-SQL? I have a developer who always uses @@ for local variables and @ for reference variables (meaning variables declared as parameters of a stored procedure or function).
Is that purely stylistic? Is it a holdover from some previous version? Or is it a legitimate best practice that I've not seen before?
My google-shui is weak today; I found nothing when searching.
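(For reference, a minimal illustration of the convention as I understand it; variable names are invented. The @@ prefix is reserved by convention for the built-in system functions, so home-grown @@ locals appear to work only because the parser tolerates them.)

-- @ is the standard prefix for local variables and parameters alike.
DECLARE @rowTotal int
SELECT @rowTotal = COUNT(*) FROM sysobjects

-- @@ names are the built-in system functions, e.g. @@ROWCOUNT, @@IDENTITY.
SELECT @@ROWCOUNT AS RowsFromLastStatement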
Both these tables contain considerable numbers of rows, but over time tableA will end up containing orphaned values (i.e. values whose a_id is not used in tableB), and this problem cannot be rectified by setting, for example, cascading deletes.
To fix this problem I decided to write a simple stored procedure to purge all values in tableA whose a_id is not used in tableB:
DELETE FROM tableA WHERE a_id NOT IN (SELECT a_id FROM tableB)
Now, although the following document relates to Postgres:
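(Two hedged cautions on that DELETE, with a sketch: NOT IN matches nothing at all if the subquery can return a NULL, and on tables this size one giant DELETE can balloon the transaction log. The batch size below is hypothetical.)

-- NOT EXISTS avoids the NULL trap that silently empties a NOT IN,
-- and deleting in batches keeps the log manageable.
SET ROWCOUNT 10000   -- hypothetical batch size (SQL 2000 style; DELETE TOP on 2005)
WHILE 1 = 1
BEGIN
    DELETE FROM tableA
    WHERE NOT EXISTS (SELECT * FROM tableB WHERE tableB.a_id = tableA.a_id)
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0       -- always reset, or later statements stay capped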
I wonder if anyone else out there has the same impression that I have: I find that DTS works much better than SSIS.
I find that DTS is so easy to use and reliable: it gets the job done, and fast! On the other hand, SSIS seems so needlessly complex that it takes hours of troubleshooting just to get it to work, and sometimes it doesn't work at all. For example, I have just spent hours trying to get SSIS to import a flat file with 300,000 rows. It just crashes, without even giving an error message one could act on. I have now successfully accomplished the same task with DTS, and it took me 5 minutes!
I honestly don't see a valid reason for using SQL Server 2005 instead of 2000. So far it's much more productive to use 2000.
Hi all, any suggestions, views, or help on the question below would be welcome.

I am building an ASP.NET 2.0 application with SQL 2005 Express as the back end. My back end has three major tables:

tblArticles - saves basic info on articles posted by users (articleid, title, short desc, rating, views, etc.)
tblCategories - saves the various categories and their hierarchies (id, parentid, name, etc.)
tblArticleCategories - saves which articles fall in which categories (articleid, categoryid)

As of now I am caching all rows from the first two tables, but I am in a bit of doubt about caching the third (tblArticleCategories). Data in this table won't change very often, the table has just two columns and not many rows, so it is a good target for caching. The reason I hesitate is that when a visitor clicks a category link in the category tree view, I need an inner join across all three tables (sketched below) to locate and return all articles found in that category. But I could do the same thing without hitting the database: since two of the required three tables are already in my cache, I could simply add the third and, using the DataView object's RowFilter property on the three cached tables, get the appropriate results.

Which of the two methods would you prefer and suggest? That is, do you feel that just to save hits against the database I am going too far and doing a lot of work in the DataView (which might not be as efficient as the SQL engine), or do you feel the inefficiency of the DataView still wins compared to the cost of hitting the database for this?

Thanks in advance, bye, take care. Raj Chaudhari, Mumbai, India (MCAD.NET) www.xtremebiz.biz
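(For concreteness, the database-side query in question would be something like this sketch; the join columns are guessed from the table descriptions above.)

-- Hypothetical query: all articles in one category.
SELECT a.articleid, a.title
FROM tblArticles a
INNER JOIN tblArticleCategories ac ON ac.articleid = a.articleid
INNER JOIN tblCategories c ON c.id = ac.categoryid
WHERE c.id = @categoryid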
Hello all, I have a table 'statistics' which holds information about another table, i.e. the number of rows belonging to each user. Would I be better off using a trigger after each insert to increment a certain row? Or would I be better off selecting the data by means of a SQL statement and updating the column whenever the statistics page is requested? Does SQL provide any method which allows a column to count other rows or columns?
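(If the trigger route appeals, a minimal sketch of the incrementing approach; the tracked table, its columns, and the user_id key are all hypothetical, and a matching DELETE trigger would be needed to decrement.)

-- Counts per user within the inserted batch, since one INSERT can carry
-- many rows, then adds those counts to the statistics table.
CREATE TRIGGER trg_documents_count
ON dbo.documents
AFTER INSERT
AS
BEGIN
    UPDATE s
    SET s.row_count = s.row_count + i.cnt
    FROM dbo.[statistics] s
    INNER JOIN (SELECT user_id, COUNT(*) AS cnt
                FROM inserted
                GROUP BY user_id) i ON i.user_id = s.user_id
END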
Has anyone here ever used the Informix database and can give me some differences between Informix and SQL Server?
One of our users is thinking about purchasing a COTS product that only supports an Informix database. I need to convince the user to evaluate rival applications that can run on SQL Server, and I need some arguments in favor of not going with Informix.
We currently use CA ArcServe (ArcServe 6.5 Enterprise and Single Server Editions) to back up our Windows NT files and MS SQL Server databases. We have experienced significant reliability issues with ArcServe. Many times we have found ourselves rebuilding a corrupt ArcServe Job database (ArcServe's backup schedule). One of our NT servers occasionally bug checks when ArcServe is performing backups. Occasionally ArcServe Jobs incorrectly reschedule themselves. Sometimes Jobs do not complete but stay executing, not performing any work, and cancelling them can require a lot of effort. The ArcServe Job DB repair utility generally does not work. The user interface is lacking; for example, the job scheduling options are very limited. CA tech support for this product has been poor. Because we have issues with ArcServe's stability, we are now evaluating Veritas (formerly Seagate) Backup Exec for NT. What are other people's experiences with these two products?
I've got a network tech that I work with from time to time. He's going to migrate an Access database over to SQL Server. He says it should be easy: it's a flat file, he can just do it through Enterprise Manager. I warned him that data types can become an issue (you kind of have to know your db); he looked at me like I'm an idiot and proceeded to migrate the tables over to SQL Server... Needless to say, he got a lot of error messages and is now totally confused. Now let me ask some experts who really know databases: do you ever have problems with network techs who think they know it all?
We are planning hardware purchases (more is better). One of our databases is 131 gigs in size and has 45 gigs of 'space available'. I'm not a very experienced SQL Server person, but this seems like quite a bit of 'space available'.
1) Is there a way to regulate the amount of 'space available'? 2) Are there any rules of thumb for how much 'space available' there should be?
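(Two commands that bear on question 1, as a sketch; the logical file name and target size are hypothetical, and shrinking is usually a last resort because the file simply regrows, fragmenting indexes as it does.)

-- Reports allocated vs. unallocated space for the current database.
EXEC sp_spaceused

-- Releases unused space in one data file back to the OS, down to a
-- hypothetical target of 100000 MB; use sp_helpfile for the logical name.
DBCC SHRINKFILE (mydb_data, 100000)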
We are about to purchase new database servers and have been offered a good deal on 64-bit Xeon machines. At present we run SQL 2000 on Windows Server 2003, both of which are 32-bit versions.
Is there any problem using our current 32-bit Server software on the 64-bit machines (apart from not being able to utilise its full power)? I'm assuming the SQL 2005 licenses are the same price regardless of 32-bit or 64-bit version. If we buy a 64-bit SQL Server version license are we going to get the best out of it on a 32-bit Windows Server edition?
I have always been told that cursors create a lot of overhead and consume a lot of system resources. Is it faster to store the data in a temp table and loop through it using SELECT TOP 1 and DELETE statements, or by using a static, forward-only cursor? Both ways store the data in tempdb, but doesn't the WHILE loop generate more I/Os than the cursor? In theory, I am thinking that the cursor is better. Any info will be appreciated.
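(For concreteness, the two patterns in question, sketched over a hypothetical temp table.)

-- Hypothetical work table, populated with a few ids for illustration.
CREATE TABLE #work (id int NOT NULL PRIMARY KEY)
INSERT INTO #work (id) SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3

DECLARE @id int

-- Pattern 1: TOP 1 / DELETE loop. Consumes the table as it goes, and every
-- row costs a SELECT plus a DELETE (two statements' worth of I/O).
WHILE 1 = 1
BEGIN
    SELECT TOP 1 @id = id FROM #work ORDER BY id
    IF @@ROWCOUNT = 0 BREAK
    -- ... per-row work with @id ...
    DELETE FROM #work WHERE id = @id
END

-- Repopulate, since the first pattern emptied the table.
INSERT INTO #work (id) SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3

-- Pattern 2: static, forward-only cursor. One snapshot into tempdb up
-- front, then a cheap fetch per row, and the table is left intact.
DECLARE c CURSOR FORWARD_ONLY STATIC FOR SELECT id FROM #work ORDER BY id
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- ... per-row work with @id ...
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c
DROP TABLE #work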
I have a table with a field defined as nvarchar. I want to change it to varchar. I have a stored procedure which defines the parameter @strCall_desc as nvarchar(4000). Are there going to be any problems with running this sp if I just change the field type as described?
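(The mechanics, sketched with hypothetical table and column names. The nvarchar parameter itself is not a problem, since SQL Server converts it implicitly on assignment; the real risk is that existing Unicode characters with no equivalent in the column's code page are silently replaced by the conversion.)

-- Hypothetical names; check the data for non-ASCII characters first.
ALTER TABLE dbo.Calls
    ALTER COLUMN call_desc varchar(4000) NULL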
I have a database that is being used as a sort of reports data warehouse. I use DTS packages to upload data from all the different sources. Right now I have it truncating the tables and appending fresh data. I was considering using updates instead, and my question was: which is more efficient?
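(If 'updates' means keeping rows in place, the usual pattern on SQL 2000/2005 is an UPDATE for keys that already exist followed by an INSERT for new ones; a sketch with invented staging and warehouse tables.)

-- Refresh rows that are already in the warehouse table...
UPDATE w
SET w.amount = s.amount
FROM dbo.ReportFacts w
INNER JOIN dbo.StagingFacts s ON s.fact_id = w.fact_id

-- ...then add the rows that are new.
INSERT INTO dbo.ReportFacts (fact_id, amount)
SELECT s.fact_id, s.amount
FROM dbo.StagingFacts s
WHERE NOT EXISTS (SELECT * FROM dbo.ReportFacts w WHERE w.fact_id = s.fact_id)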
I am trying to get a better understanding of when to use RETURN (with a PRINT statement) and when to use RAISERROR.
* Both statements can be used in stored procedures, while only RETURN can be used in functions.
* With RAISERROR it is easy to have multiple errors thrown (if both the calling procedure and the called procedure try to handle the error).
Wow. I thought I could think of more. So that really leaves me with very little info on the proper use of these two statements.
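(A small contrast, offered as a sketch: RETURN hands back a status code that is invisible unless the caller captures it, while RAISERROR surfaces a real error that clients and TRY/CATCH blocks can see. The procedures and message text are invented.)

-- RETURN: failure is silent unless the caller does EXEC @rc = ...
CREATE PROCEDURE dbo.DoWork_Return AS
BEGIN
    IF NOT EXISTS (SELECT * FROM dbo.Config)
    BEGIN
        PRINT 'Config table is empty'
        RETURN 1   -- nonzero means failure, by convention only
    END
    RETURN 0
END
GO

-- RAISERROR: failure propagates as an error in its own right.
CREATE PROCEDURE dbo.DoWork_Raiserror AS
BEGIN
    IF NOT EXISTS (SELECT * FROM dbo.Config)
        RAISERROR('Config table is empty', 16, 1)  -- severity 16 = user error
END
GO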
I have been searching for a way to associate a description with a column name. I have come across multiple posts regarding this question. The problem is that I have seen two different answers. One post mentioned using the undocumented system table named sysproperties, while other posts mentioned using sp_addextendedproperty (and fn_listextendedproperty). Which one, and why one over the other? Thanks.
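(The extended-property procedures are the documented, supported route; sysproperties is just the table that used to sit underneath them, and it is gone in SQL 2005. A sketch with a hypothetical table and column; on SQL 2000 the level-0 type is 'USER' rather than 'SCHEMA'.)

-- Attach a description to a column; 'MS_Description' is the property
-- name that Management Studio itself reads and writes.
EXEC sp_addextendedproperty
    @name = 'MS_Description',
    @value = 'How the order will be shipped',
    @level0type = 'SCHEMA', @level0name = 'dbo',
    @level1type = 'TABLE',  @level1name = 'Orders',
    @level2type = 'COLUMN', @level2name = 'ship_method'

-- Read it back.
SELECT * FROM ::fn_listextendedproperty('MS_Description',
    'SCHEMA', 'dbo', 'TABLE', 'Orders', 'COLUMN', 'ship_method')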
If a database has relationships established between all of the tables via primary and foreign key constraints, why isn't it possible to make a SELECT statement across multiple tables without using a JOIN? If the system knows the relationship schema already, why are JOINs required? Thanks, HC
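(One way to look at it, sketched with invented tables: the constraint only records that two columns can be matched, while the query must say how they are matched this time: inner or outer, once or twice via different keys, and so on.)

-- The FK says orders.customer_id references customers.id; the JOIN says
-- whether unmatched customers appear (outer) or not (inner) in this query.
SELECT c.name, o.order_date
FROM customers c
INNER JOIN orders o ON o.customer_id = c.id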
This question probably overlaps a few different topic areas.
As I will be required to work with both Oracle and SQL Server, I will be in a difficult position with SSIS (due to its change in distribution).
Therefore I am having to look at alternatives.
With code I can open a text file and parse it reasonably to my satisfaction. However, getting the data into the database is incredibly slow.
I am using an INSERT INTO for each line, which I am sure everyone will shake their head over. This seems to be pretty slow even when using transactions.
Is there any scope in using data tables, or in having the read on one thread and the write on another?
Other than that, is there an Oracle equivalent of SSIS? (I'll probably get shot for asking that on a Microsoft web site, but I would probably get shot if I asked on the Oracle forums as well.)
In the past we had reasonable results outputting to CSV and then doing some sort of bulk insert (sketched below), messy and irritating though that may be.
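(On the SQL Server side, the CSV-then-bulk-load route is at least short; the path and table name here are hypothetical. From .NET code, the SqlBulkCopy class in System.Data.SqlClient streams rows in bulk without the intermediate file, and Oracle's rough counterpart is SQL*Loader or an external table.)

-- Hypothetical bulk load of the parsed file into a staging table.
BULK INSERT dbo.StagingLines
FROM 'C:\loads\parsed.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')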
Any ideas on this area will be gratefully accepted.
Can someone tell me in basic terms the difference between a signed and an unsigned integer? When would you decide to use one over the other? I'm looking for it more in layman's terms than as a technical bit-level discussion.
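(In layman's terms: both use the same number of bits, but a signed integer spends one of them on the sign, trading half of its top range for the ability to go negative; pick unsigned for counts that can never be negative and signed for anything that can. SQL Server happens to illustrate this neatly, since tinyint is unsigned and smallint is signed.)

-- tinyint: 8 bits, unsigned, holds 0 through 255.
DECLARE @age tinyint
SET @age = 255        -- fine; 256 or -1 would raise an overflow error

-- smallint: 16 bits, signed, holds -32768 through 32767.
DECLARE @balance smallint
SET @balance = -32768 -- negatives allowed because one bit carries the sign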
Can anyone give me some advice on using authentication? What is the best way to go with a database on a server, and why? And in order to use SQL authentication, do you have to change the registry? I have seen some posts that seem to say you can only use it by changing the registry.
So if anyone can give me the pros and cons, I would appreciate it.
My question concerns both desktop and device apps.
I'm using SQL Compact to store some data. I often have to store strings (descriptions, URLs, etc.) but I don't know when to use nvarchar or ntext.
Nvarchar needs a size limit, but I often set it to 8092 when I don't know the actual limit (URLs can be very long!). I am wary of ntext because I suppose there is a performance impact.
Are there any "rules" to help choose which data type I should use?
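(A sketch of the usual split, with hypothetical column sizes: in SQL Compact, nvarchar tops out at 4000 characters, so anything that might exceed that has to be ntext anyway; below the cap, a bounded nvarchar is generally the lighter choice.)

-- Bounded strings as nvarchar, genuinely unbounded text as ntext.
CREATE TABLE Bookmarks (
    id int IDENTITY(1,1) PRIMARY KEY,
    title nvarchar(200),      -- short, known bound
    url nvarchar(2048),       -- long but still bounded (a common URL cap)
    notes ntext               -- free-form text with no sensible bound
)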
We are currently working on a project where we need to create reports in Visual Studio 2005. The parent screen has a drop-down which specifies the name of the report, along with some other parameters; the report is then displayed in the same screen. Now the issue is that we are pretty much confused as to what to use to generate these reports: HTML or SSRS? The input fields are only a from date and a to date, and the displayed fields are also not that many.
Given the following objective:
1. Assume that I have a table that contains two fields: an auto-numbered id and an integer value.
2. Check to see if a record exists in the table, based on a parameter query of the integer value.
3. If the record exists, return the record id.
4. If the record does not exist, insert a new record into the table (using the parameter value as data) and return the auto-numbered id of the new record.
I can do each of these things as a sequence of individual steps, of course, but it seems to me that I ought to be able to do it with a single udf (or perhaps a specialized query) that would be more efficient. I couldn't find something like this in the beginning SQL Express books I have on hand, and I also didn't find anything exactly on point in this newsgroup or via a search of Google. However, I am sure the answer is 'out there' and I am hoping that someone can point me in the right direction. Thanks! Duncan
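(It would have to be a stored procedure rather than a udf, since udfs are not allowed to modify tables; a minimal sketch with invented names. Under concurrent callers you would also want the lookup and insert wrapped in a transaction with UPDLOCK/HOLDLOCK hints to avoid duplicate inserts.)

-- Hypothetical table: dbo.IntValues (id int IDENTITY, int_value int).
CREATE PROCEDURE dbo.GetOrCreateValueId
    @int_value int,
    @id int OUTPUT
AS
BEGIN
    SELECT @id = id FROM dbo.IntValues WHERE int_value = @int_value

    IF @id IS NULL
    BEGIN
        INSERT INTO dbo.IntValues (int_value) VALUES (@int_value)
        SET @id = SCOPE_IDENTITY()  -- safer than @@IDENTITY if triggers exist
    END
END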