Outputting Large Rows (>1000 Chars) Via Osql Or Isql?

Jan 25, 2001

I am attempting to generate files containing more than 1000 characters per line by outputting the results of a stored procedure via osql to a flat file. Osql (and isql) appear to force a newline after 1000 characters, even when specifying a -w2000 parameter.

I have also tried to output the results of the stored procedure via DTS and this appears to do the same thing!

Does anybody know how to prevent osql (or isql) from forcing the newline?
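A commonly suggested workaround, assuming the procedure returns a single result set: bcp in queryout mode writes each row in full, without osql's display-width wrapping. Server, database, and procedure names below are placeholders:

bcp "exec MyDb.dbo.MyProc" queryout output.txt -c -S MyServer -T

Here -c outputs character data with a newline only at the true end of each row, and -T uses a trusted connection.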




Transact SQL :: Select 1000 Rows At A Time From / Into A Large Temp Table?

May 12, 2015

I am using SQL Server 2008 R2, not Denali, so I cannot use the OFFSET-FETCH clause.

In my stored procedure, I am doing a SELECT INTO #tblTemp FROM... Working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file... Working fine.

For recoverability's sake, I am trying to throttle back the commit chunks to 1000 rows per commit until there are no more rows. I am trying to avoid large rollbacks.

Q: Am I supposed to handle the transactions (begin/commit/rollback/end trans) when the records are being inserted into the temp table? Or when they are being selected from the temp table?

Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB destination (like Rows per batch and Maximum insert commit size).
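One hedged sketch of handling it on the insert side, assuming the source table has an increasing key (table and column names here are placeholders); each 1000-row chunk is its own committed transaction, so a failure loses at most the current chunk:

DECLARE @lastId int
SET @lastId = 0
WHILE 1 = 1
BEGIN
    BEGIN TRAN
    -- copy the next 1000 rows past the last key we loaded
    INSERT INTO #tblTemp (Id, Col1)
    SELECT TOP (1000) Id, Col1
    FROM dbo.SourceTable
    WHERE Id > @lastId
    ORDER BY Id
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT
        BREAK
    END
    SELECT @lastId = MAX(Id) FROM #tblTemp
    COMMIT
END

That said, rows in a session-scoped temp table vanish with the session anyway, so chunked commits mainly pay off if #tblTemp is replaced with a permanent staging table.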


Osql / Isql

Jan 17, 2008

Is osql/isql supported in SQL 2005? How about in 2008? Thanks


ISQL And OSQL Output Lines Wrapped Around At 256 Characters?

Jul 20, 2005

I am trying to use a command line program to run a stored procedure that generates output in a comma-delimited format. Somehow, ISQL or OSQL always wraps the lines at 256 characters. I believe this has something to do with the column width switch (-w), but enlarging the column width to 800 characters still doesn't help. The following stored procedure essentially does what my stored procedure is doing:

create procedure MyTest as
set ansi_padding on
set nocount on
declare @sTest varchar(300)
-- Output three lines. Each line has 259 characters.
select @sTest = "1234 6789 ... 1234 6789"
print @sTest
select @sTest = "1 3 5 7 9 ... 1 3 5 7 9"
print @sTest
select @sTest = "1 3 5 7 9 ... 1 3 5 7 9"
print @sTest
set nocount off
return( 0 )

I invoke this stored procedure using this command:

isql -SMyDbSrv -E -dMyDb -w800 -x800 -h-1 -n -Q"exec MyTest" -oMyTest.txt
-- or --
osql -SMyDbSrv -E -dMyDb -w800 -h-1 -n -Q"exec MyTest" -oMyTest.txt

But they have the same problem: the output lines all wrap around at 256 characters.

Strangely, if I store the result in a temporary table and then use SELECT to output the result from the temporary table, I do not have that problem. It seems like the "-w" switch only works for output from tables, but not for output coming from PRINT. Unfortunately, using this approach has another set of problems (one blank space in front of each line, and "number-of-rows affected" shows up at the bottom), so I would like to stick with using PRINT statements to output the result.

Please suggest a way to fix this line-wrapping problem.

Thanks.

Jay Chan


Isql / Osql / Windows XP / Disable Automatic ANSI To OEM Conversion

Jan 18, 2006

Hello to all SQL Server junkies who work with non-English characters:

For people running scripts from the command line using ANSI files with special characters, it is very important to use isql and disable "Automatic ANSI to OEM conversion":

- This only affects isql from the command line, and no GUI applications
- http://support.microsoft.com/?scid=kb;EN-US;153449
- Start the "Client Network Utility": C:\WINDOWS\system32\cliconfg.exe
- Select the DB-Library tab
- Deselect "Automatic ANSI to OEM conversion"
- Click OK or Apply

Or inject this registry entry:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\DB-Lib]
"AutoAnsiToOem"="OFF"

Here are some useful lines from a batch script to query the current value in the registry and reset it if necessary. This is tested on Windows XP. It will query the registry, throw away the first three lines of output, and return the value of the third field on the fourth line. DELIMS lists one tab character and one space character. Type the following all on one line:

@FOR /F "SKIP=3 TOKENS=3 DELIMS= " %%A IN ('REG QUERY HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\DB-Lib /v AutoAnsiToOem') DO @SET AUTOANSITOOEM=%%A

Type the "REG ADD" line all on one line:

@IF /i "%AUTOANSITOOEM%" EQU "ON" (
@ECHO ************************************************************************
@ECHO ****
@ECHO **** We need to disable "Automatic ANSI to OEM conversion"
@ECHO **** Please see http://support.microsoft.com/?scid=kb;EN-US;153449
@ECHO **** This only affects isql from the command line
@ECHO ****
@ECHO ************************************************************************
@REM REG ADD HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\DB-Lib /v AutoAnsiToOem /t REG_SZ /d OFF
)

Alternatively, you must use Unicode script files and osql.

PS: Thank you to Erland Sommarskog for http://www.sommarskog.se and Rob van der Woude for http://www.robvanderwoude.com


First 1000 Rows In 6.5

Oct 15, 2001

How do you select the first 1000 rows from a table in SQL Server 6.5?
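SQL Server 6.5 predates TOP (which arrived in 7.0), so the usual answer is SET ROWCOUNT; a minimal sketch with hypothetical table and key names:

SET ROWCOUNT 1000
SELECT * FROM MyTable ORDER BY KeyCol
SET ROWCOUNT 0  -- reset, or every later statement is capped at 1000 rows too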


SQL Server Agent Job Log With 1000 Rows Limitation?

May 11, 2007

Hello,

It seems that there is some kind of limitation in the SQL Server Agent job log. For any job there are no more than 1000 rows of history. This can be seen with the following query:



SELECT j.name, count(*)
FROM sysjobhistory jh, sysjobs j
WHERE jh.job_id = j.job_id
GROUP BY j.name
HAVING count(*) >= 1000;



How can we modify this limitation? We really need more than 1000 rows.



r,

J
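The 1000-row cap is SQL Server Agent's job history retention setting, adjustable in Management Studio under SQL Server Agent > Properties > History, or with the (undocumented) msdb procedure sketched below; the values shown are only examples:

EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 100000,
    @jobhistory_max_rows_per_job = 10000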


Active Directory 1000 Rows Limitation

May 8, 2006

We are using:

select distinguishedname, employeeid, sn, middlename, givenname, displayname, samaccountname, mail, cn, telephonenumber
from OpenQuery( ADSI, 'SELECT telephonenumber, distinguishedname, employeeid, sn, middlename, givenname, displayname, samaccountname, mail, cn
FROM ''LDAP://DC=domain,DC=xxx,DC=xxx''
WHERE objectCategory = ''Person'' AND objectClass = ''user''')

But it only returns 1000 rows, which I have read all around is the default.

How do you set this limit higher?

Thanks
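The cap comes from Active Directory's server-side page size (MaxPageSize, 1000 by default), and the ADSI linked server does not page results. One commonly posted workaround, sketched here against the same hypothetical domain as above, is to slice the query into ranges that each return fewer than 1000 rows and UNION the slices:

select samaccountname, displayname
from OpenQuery( ADSI, 'SELECT samaccountname, displayname
FROM ''LDAP://DC=domain,DC=xxx,DC=xxx''
WHERE objectCategory = ''Person'' AND objectClass = ''user''
AND samaccountname >= ''a'' AND samaccountname < ''h''')
-- repeat for ''h''-''p'', ''p''-''z'', etc., and UNION ALL the slices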


Want Insert 1000 Rows Into SQL Tables Through Application

Mar 16, 2007

Hi

I have a PL/SQL procedure at an Oracle database which extracts 10,000 rows from a table, and now I have to load all 10,000 rows into SQL 2005 tables.

I have extracted the data from Oracle into a DataAdapter/DataSet; now I want to load all the rows into SQL 2005 tables. Please help with how I can load them.

If I use an insert statement every time, it makes the server busy and takes much time for 10,000 inserts to complete (even using a procedure gets heavy, since it has to be called for every insert).

Is there any possibility that I can pass the REF cursor / DataSet / DataAdapter into a SQL stored procedure so that the inserts happen all together?



Thanks in advance for your help.



Regards:

Nanjappa


SQL 2012 :: SSRS - Put 10 Rows On First Page And Rest (1000+) Rows On Second Page

May 1, 2014

Is there any way to control this scenario? I know the trick to put 10 rows on each page, but I need to split them unevenly: 10 on the first page and the rest on the second page. Is it possible?

=Ceiling((RowNumber(Nothing)) / 10)
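A hedged variant of that expression: put the first 10 rows in page group 1 and everything else in group 2 (the renderer will still soft-break group 2 across physical pages if it cannot fit):

=IIF(RowNumber(Nothing) <= 10, 1, 2)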


ISQL - Blank Line At The End Of The ISql Output File

Sep 23, 2005

Hi,

I'm using isql to query data and output the same to a flat file. The isql command has the following options: ' -h-1 -w500 -n -b -s"" '.

In the SQL_CODE, the first two lines before the select statement are:

use dbname
set nocount on
go

When I run this, an additional blank line is put into the output file. Actually, there are two lines after the last result set in the output file. This file is being fed into another system and the blank line is causing validation issues.

How can I suppress this blank line?

This script is run from Windows and isql is called from a .bat script.

Batch script:
============================================================
...
isql -Uuserid -Ppassword -Sserver -i"%SQL_CODE%" -h-1 -w500 -n -b -s"" >>"%OUT_FILE%"
IF ERRORLEVEL 0 SET RC=0
IF ERRORLEVEL 1 exit 4
============================================================

SQL code:
============================================================
use punclaim
set NOCOUNT ON
GO
select * from XYZ;
GO
============================================================

Your help is greatly appreciated.
Yash


Does A Synchronous Transformation Process All Rows In A Buffer Before Outputting To Next Transformation?

Jun 5, 2006

Hi,

If you have two synchronous transformation components and the input of the second is connected to the output of the first, does the first transformation process (loop through) all rows in the buffer before outputting these rows to the second transformation? Or does the first transformation output each individual row to the second transformation as soon as it has finished processing it?

Thanks in advance,
Lawrie.


TSQL - Select First 3 Chars Where Not Special Chars

Feb 5, 2002

Say I have a column called 'NAME' in a table called 'CLIENT' and the values in NAME are Surnames or company names like:

NAME
----------------------
1-FOR-ALL
A.B. SMITH (TOOLS LTD)
BROWN
THOMSON
VW CAR SALES



I want my select to return the first 3 characters, excluding special characters (only characters between 1 and z).

For example, the following would be returned for the data above:

NAME
----------------------
1FO
ABS
BRO
THO
VWC
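A sketch of one way to do this, assuming SQL Server 2000 or later (for scalar UDFs) and a case-insensitive collation: strip every character outside 0-9/a-z with PATINDEX and STUFF, then take the first three.

CREATE FUNCTION dbo.First3Chars (@s varchar(255))
RETURNS varchar(3)
AS
BEGIN
    DECLARE @p int
    -- remove each character that is not 0-9 or a-z
    SET @p = PATINDEX('%[^0-9a-z]%', @s)
    WHILE @p > 0
    BEGIN
        SET @s = STUFF(@s, @p, 1, '')
        SET @p = PATINDEX('%[^0-9a-z]%', @s)
    END
    RETURN LEFT(@s, 3)
END

-- usage:
SELECT dbo.First3Chars(NAME) AS NAME FROM CLIENT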



Non English Chars Are Being Shown As Junk Chars

Jan 9, 2008

Hi All

I have loaded some data to the application using flat files

which have non-English chars.

all the columns in the database are NVARCHAR type.

but in the db and in the application UI, the non-English chars are being displayed as junk chars (???121).

The application supports UTF-8 format

Is there any setting at the db level that can be modified to display the non-English char set as-is?

Thanks


Truncation Error: 255 Chars To 2 Chars.

Aug 27, 2007

Hi,

I am very new to using SSIS.

Trying to import data from MS Access 2000.

I receive the error "
[OLE DB Destination [1907]] Warning: Truncation may occur due to inserting data from data flow column "GENDER" with a length of 255 to database column "GENDER" with a length of 2. " on the source flow.

I have done some googling and came up with this post: http://torontosql.dotnetnuke-portal.com/Default.aspx?tabid=32233 which I thought may help, but it does not.

The query against the Access data source features the column: iif([sex]=1, 'm', 'f'). I tried using left(..., 2) as well, but SSIS is determined to treat the field as 255 characters for some reason.

I don't even particularly care that the field is 255 chars when the source is only two; I just want the data in! I have other fields coming up with similar errors.

Can someone please advise?

PS, what is the significance of the "External Columns" vs "Output Columns" on the Input and Output Properties tab in the Advanced Editor?

I am really struggling with SSIS, it is not as intuitive as DTS.



Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.

Our scenario is the following:
* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250, or around 5 billion rows.

We will be inserting around 50 * 20,000, or around 1 million, rows each day (the inserts will take place in the middle of the night, outside the main query time); this could be done through something like BCP or BULK INSERT.

Our real problem is that we have not previously worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site we have found a partitioning method that is available: http://www.microsoft.com/resources/...art5/c1861.mspx

Having experimented with the above system it seems rather quirky, and looking at the available literature it seems that this is no more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
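For the stated query pattern (ItemID and AnalyticType, then a date or date range), a composite clustered index in that column order is the usual starting point; a sketch with a hypothetical table name:

CREATE CLUSTERED INDEX IX_Analytics ON dbo.Analytics (ItemId, AnalyticType, AnalyticDate)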


Large Number Of Rows.

Apr 3, 2007

Hi all,



A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?



Does the number of records returned affect the performance in spite of the indexing?



Thanks,



DBLearner


Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of inserting a large number of rows efficiently?

Thanks


Retrieving First N Rows From A Large Query

Feb 15, 2007

Sriram writes "Hi,

I want to retrieve only the first n rows from a query which returns a large number of rows.

Say,

select empno, name from emp where deptno=100

returns 1000 rows.

I want to improve the query so that it returns only the first 10 rows and not 1000 rows.

Thanks in Advance,
Sriram."
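With TOP, SQL Server stops producing rows once it has the requested number (add an ORDER BY if a specific ten rows are wanted):

select top 10 empno, name from emp where deptno=100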


Large Amount Of Pages For Few Rows

Jul 23, 2005

Hello,

I have experienced that some of my tables occupy an extremely large number of pages but contain few rows. An example is a table with 37 rows over 22000 pages! The columns are simple integer and char. I fixed the problem by introducing a clustered index; now it only uses 1 page. But can anyone explain this behaviour in SQL Server 2000?

regards
Jakob Mathiasen


Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out. What can I do?

Thanks
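If the table really is to be emptied and nothing references it via a foreign key, TRUNCATE is the usual answer: it deallocates pages rather than logging every row, so it finishes quickly where a 10-million-row DELETE times out:

TRUNCATE TABLE tblLog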


How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1 increment=1) + few other columns (col2...col7) + one text column (col 8)
I have around 50,000,000 rows per day inserted into the table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database is serving the web 24/7 and no downtime is allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month, data is archived into another table and only the last month is kept in table T1. So table T1 enters the new year with data from the last month of the previous year. There are a few other tables that refer to this table by using their own field with values from T1's IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.
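A sketch of the reseed itself, assuming the annual archive has already moved the old ID range out of T1 (reusing identity values still present in the table would create duplicates):

DBCC CHECKIDENT ('T1', RESEED, 0)  -- next inserted row gets identity 1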

Another problem I have is how to do a table update on one column (1 million rows to be updated out of 2 million rows) with the minimum impact on the users who are querying this table heavily. Needless to say, it is a 24/7 web app with no downtime.

Thank you in advance.


Goran


Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system it self is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005. So I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels, mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar[2500], no index
LogTimeStamp- datetime - index asc


I have come up with 3 different solutions:

Method 1:
Simply run "Delete from db_log where logtyipeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?


Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work a few days we might lose data we'd wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems as if this method is what will work best for my set-up, unless there are other suggestions?
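A hedged sketch of method 3 as a single transaction: TRUNCATE can be rolled back inside a transaction, and it deallocates pages rather than logging every row, so it is fast. The kept type ids are placeholders, and rows logged between the copy and the truncate are lost, so this belongs in a quiet window:

BEGIN TRAN
    INSERT INTO db_log_keep (ClientId, LogTypeId, LogValue, LogTimeStamp)
    SELECT ClientId, LogTypeId, LogValue, LogTimeStamp
    FROM db_log
    WHERE LogTypeId IN (1, 2)  -- hypothetical ids of the startup/error rows to keep
    TRUNCATE TABLE db_log
COMMIT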

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, whatever tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai


Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey Guys

I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently, but I can't afford to stop the db at all.

whenever i do : alter table mytable add Updated_date datetime

it just takes too long and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2; the db size is over 3 GB but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!

thanks in advance
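Worth noting: adding a nullable column with no default is a metadata-only change, so the ALTER itself does no row-by-row work; the long runtime is almost certainly the schema-modification lock queuing behind running queries. A hedged sketch that bounds the wait so the statement can simply be retried in a quiet moment:

SET LOCK_TIMEOUT 5000  -- fail after 5 seconds instead of queuing indefinitely
ALTER TABLE mytable ADD Updated_date datetime NULL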


SQL Server 2012 :: Merging Two Large Tables (More Than 100m Rows)

Aug 18, 2014

SQL 2012

I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.

stg.fact is not a delta feed it is basically an intra-day refresh.

Both tables have a last updated date so it's easy to see which rows have changed.

It will be new (insert) or changed (update) data that I am interested in, there are no deletions.

As this could be millions of rows of inserts or updates, this needs to be efficient.

I expect whs.Fact to go to >150 million rows.

When I have done this before, I started with the T-SQL MERGE statement, and that was not performant once I got to this size.

My original option was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation, the reference data set would have a package variable in the SQL command. This does not seem possible with the lookup in 2012! Currently looking at the Merge Join transformation and any clever basic T-SQL that could work, as this will need to be fast, and that's where I think T-SQL may be the better route.

Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one varchar, one datetime

Last time I used MERGE it was a wider table with lots of columns, so I don't know if this would be an option.
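A hedged T-SQL pattern that often scales better than MERGE at this size: a set-based UPDATE of changed rows followed by an INSERT of new ones, filtered on the last-updated date (BusinessKey and Col1 are placeholder names):

-- update changed rows
UPDATE w
SET w.Col1 = s.Col1, w.LastUpdated = s.LastUpdated
FROM whs.Fact w
JOIN stg.Fact s ON s.BusinessKey = w.BusinessKey
WHERE s.LastUpdated > w.LastUpdated;

-- insert new rows
INSERT INTO whs.Fact (BusinessKey, Col1, LastUpdated)
SELECT s.BusinessKey, s.Col1, s.LastUpdated
FROM stg.Fact s
WHERE NOT EXISTS (SELECT 1 FROM whs.Fact w WHERE w.BusinessKey = s.BusinessKey);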


Joining Large Fact Table To A View That Returns 120 Rows

Jan 19, 2015

I have a simple query that joins a largeish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when joined to the fact table in the simple query below, the resulting query execution plan runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN the query returns the matched results almost instantly.

Select
Dimension.Age_Band.[10_Year_Age_Band],
Count(*)
From
Fact.APC_Episodes
Inner Join Dimension.Age_Band ON
Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
Group By
Dimension.Age_Band.[10_Year_Age_Band]

I know joining to a view using a column generated by a scalar function is not a good recipe for performance. I also know that I could fix this by populating a physical table with the view first, as I have already tested this, though I am hoping not to have to go down that route.

Why does a LEFT OUTER JOIN work and not an INNER JOIN, and is there any way I can get the query optimizer to generate an execution plan that works?


AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)

Aug 30, 2007

I have a table that is being used to log track plays on our website.

Here's the table:


CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]


There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.

I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:


CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId


When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.

How exactly is the pseudo 'inserted' table formed?

For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.

I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?

Thanks!


SQL 2012 :: Deleting Large Batches Of Rows - Optimum Batch Size?

Oct 16, 2015

In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.

In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).

I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.

Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000 - 100,000 - 1,000,000 or unlimited? Are there significant differences between 2008, 2012 and 2014?
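For reference, the batch-delete shape under discussion, with the batch size as the tunable (table and predicate are placeholders; the CHECKPOINT matters in SIMPLE recovery, where it lets log space be reused between batches):

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable WHERE LogDate < '20150101'
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT
END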


SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring, and we need to delete old records (around 500k rows from one table), but the table has a foreign key relation. How should these kinds of deletes be done on production servers?


Transact SQL :: Adding A Column To A Large (100 Million Rows) Table With Default Constraint?

Apr 24, 2013

IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') and name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END

It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
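On SQL Server 2008, ADD ... NOT NULL with a default has to write every one of the 100 million rows inside one transaction (2012 Enterprise Edition makes it a metadata-only change). A hedged workaround: add the column nullable, which is near-instant, backfill in batches, then tighten to NOT NULL:

ALTER TABLE dbo.Employee ADD DoNotCall bit NULL CONSTRAINT DoNot_Call_Default DEFAULT 0

WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.Employee SET DoNotCall = 0 WHERE DoNotCall IS NULL
    IF @@ROWCOUNT = 0 BREAK
END

ALTER TABLE dbo.Employee ALTER COLUMN DoNotCall bit NOT NULL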


Best Practices Question For Outputting

Sep 19, 2005

Hey guys,

Little bit of a newbie question here... I have a database with about 20 or so tables in a relational model. I am now working on an output scheme and had a quick question regarding best practices for outputting. Would it be best to:

1) Set up a view that basically joins all of these tables together, then bind a DataSet/DataTable to it and output as needed?
2) Set up individual views for each table and run through them?

Thanks for the help!

e...


Execute SQL Outputting To A File

Apr 25, 2007

I continue to try to find "easy" solutions to what should be a straightforward problem: outputting the results from a stored procedure to a file.



I tried using both the XML task and the file system task with the thought that one of those would actually be able to output a file from a variable, but both of those tasks threw fits when I tried using different variable types (the file system task required a string, but the XML result set never seemed to produce anything but an object), so I decided to just try a script task and do everything "manually".



So my latest gyrations have been thus:

1) Set execute sql task to output XML and push to a script task to write a file

2) Set execute sql task to output a full result set and push to a script task to write a file



Number 1 was the only one I could get working, because with Number 2 I kept getting an error that said the variable wasn't a recordset (maybe it was null?).



I can actually create files now via the script task, but it seems like the variable that should get the results from the stored procedure isn't getting anything. I tried using a MsgBox to see what was actually being passed to the script task, and all I got was the number 0, which I'm assuming is the default for the object type.



What's the best way to debug this? The package runs without errors, and I'm not familiar with debugging in SSIS. How can I tell if the stored procedure is returning results into the result set variable?
