Dumb Question: Best Practices To Validate Data Columns

Aug 15, 2007

[This is one of those cases where I think I know the answer, but I hope I'm wrong!]

I have a data flow which is processing data from the XML Source. There are 16 outputs from the XML Source. I have to perform a variety of validations on these outputs: things like "column 1 is required if column 2 has value 'a' or 'b'", or "column 1 or column 2 may be present, but not both".

For lookups and such, I use the Lookup component and its error output, both to redirect rows that fail the lookup, and to capture the data, column number and error code.

But, how do I do the same for "normal" validations?

If I have to use the Conditional Split transform, then I'll have to have one output per validation, and use a Union All to combine the rows again for output to an error file. This will also cost an extra "Derived Column" transform per output, in order to get a column number and possible error code per failed validation.


Worse, it's a pain to have to maintain all the columns in such a large "Union All"!

If I had the time, I might write a "Conditional Error" transform. It might be fun. But I have to be done by the end of this month, and don't even have time to create the UI for evaluating expressions!

Any tips or tricks or pointers to such would be very welcome.
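
One alternative, sketched here with hypothetical table and column names: land the raw rows in a staging table and collect every rule violation with a single set-based query, instead of one Conditional Split output (plus Derived Column and Union All) per rule.

INSERT INTO dbo.ValidationErrors (RowID, ColumnNumber, ErrorCode)
SELECT RowID, 1, 101   -- rule: column1 is required when column2 is 'a' or 'b'
FROM dbo.StagingRows
WHERE column2 IN ('a', 'b') AND column1 IS NULL
UNION ALL
SELECT RowID, 1, 102   -- rule: column1 or column2 may be present, but not both
FROM dbo.StagingRows
WHERE column1 IS NOT NULL AND column2 IS NOT NULL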


How To Validate Flat File Data Against Not Null Columns Of A Table?

Nov 27, 2007

Hi all,

I am facing a problem validating data from a flat file while inserting it into the destination table of a SQL Server 2005 database. In my package, I have to check whether values are coming in as null before inserting into the destination. The flat file may not contain data for all NOT NULL columns; I have to find those row(s) and reject them. If rows come in as null for the NOT NULL columns, the OLE DB Destination throws an OLE DB exception for the constraint.

To resolve this, I have a Script Component in the data flow to check whether the input data is null. I added an output column of Boolean type to the Script Component; it is set to TRUE in the script when there is a null in a NOT NULL column.

And in the following Conditional Split, I check the flag for TRUE to reject the record.

Is there any other way to handle this validation?

Regards
Madhav
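
One set-based alternative, assuming the file is first bulk-loaded into an all-nullable staging table (all names below are hypothetical): reject the rows that are null in any column the destination declares NOT NULL, then insert the rest.

INSERT INTO dbo.RejectedRows (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.Staging
WHERE Col1 IS NULL OR Col2 IS NULL        -- Col1 and Col2 are NOT NULL at the destination

INSERT INTO dbo.Destination (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.Staging
WHERE Col1 IS NOT NULL AND Col2 IS NOT NULL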


Dumb Data Storage Question

Dec 13, 2006

Hello. So, here's my dumb question: if I wanted to store some *.gif images in a database field (SQL 2000, possibly 2005) and pull them out to display on a web form, am I actually storing the image in the database, or am I storing the location of the image in the database? I ask this because I was under the impression that the location of the image file is what gets stored, but another person was saying it was the actual image. I guess I'm confused... Thanks in advance.
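
Both designs exist; which one you have depends on the column type. A sketch of the two, with hypothetical names:

-- Option 1: store the image bytes themselves (IMAGE in SQL 2000, VARBINARY(MAX) in 2005)
CREATE TABLE dbo.Pictures (
    PictureID INT IDENTITY(1,1) PRIMARY KEY,
    ImageData IMAGE NOT NULL          -- the actual *.gif bytes
)

-- Option 2: store only the location; the file itself stays on disk or a web server
CREATE TABLE dbo.PictureLinks (
    PictureID INT IDENTITY(1,1) PRIMARY KEY,
    ImagePath NVARCHAR(260) NOT NULL  -- e.g. 'images/logo.gif'
)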


Validate Data Type Before Inserting

Feb 26, 2003

I have a temp table which is used to store data before inserting it into the real permanent tables. All columns of the temp table are of type nvarchar.

How do I validate the data in the temp table before doing the actual insert? For example, I need to ensure a field is a valid datetime (currently nvarchar in the temp table) before inserting it into a datetime column. Can this be done using Transact-SQL?

Thanks.
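
Yes; a sketch using ISDATE (and ISNUMERIC for numeric fields), with hypothetical table and column names:

-- Find the rows that would fail the datetime conversion
SELECT *
FROM dbo.TempImport
WHERE ISDATE(OrderDateText) = 0

-- Insert only the convertible rows
INSERT INTO dbo.Orders (OrderDate)
SELECT CAST(OrderDateText AS DATETIME)
FROM dbo.TempImport
WHERE ISDATE(OrderDateText) = 1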


Write A Query To Validate Data

Sep 10, 2014

I'm trying to write a query to validate the data.

Here is the scenario:
1. The table has Three columns 1.ID, 2.Sqno, 3. Adj
2. The values for adj are (0,1,2)

Case 1: The Sqno should start at '001000' for Adj in (0, 2) and increment by 2, i.e. the next Sqno would be '001002', then '001004', and so on.
Case 2: The Sqno should start at '001001' for Adj in (1) and increment by 2, i.e. the next Sqno would be '001003', then '001005', and so on.

Finally, when you order by Sqno within each ID, it should form one continuous running sequence.

ID Sqno Adj
123A 001000 0
123A 001001 1
123A 001002 2
123A 001003 1
123A 001004 2
123A 001005 1
123A 001006 0
123A 001007 1
123A 001008 2
123A 001009 1

How do I write a query that can validate this scenario?
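
One sketch of such a query, assuming SQL Server 2005 or later and a hypothetical table name dbo.MyTable: number the rows within each ID and sequence family (Adj = 1 versus Adj in (0, 2)), compute the expected Sqno, and return only the rows that deviate.

WITH Numbered AS (
    SELECT ID, Sqno, Adj,
           ROW_NUMBER() OVER (
               PARTITION BY ID, CASE WHEN Adj = 1 THEN 1 ELSE 0 END
               ORDER BY Sqno) AS rn
    FROM dbo.MyTable
)
SELECT ID, Sqno, Adj,
       RIGHT('000000' + CAST(CASE WHEN Adj = 1 THEN 1001 ELSE 1000 END
                             + 2 * (rn - 1) AS VARCHAR(6)), 6) AS ExpectedSqno
FROM Numbered
WHERE Sqno <> RIGHT('000000' + CAST(CASE WHEN Adj = 1 THEN 1001 ELSE 1000 END
                                    + 2 * (rn - 1) AS VARCHAR(6)), 6)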


Validate Data Content On Production And Secondary

Mar 14, 2008

Hi, we are using SQL 2000. What would be a good way to validate each database's current data against the secondary server's data after log shipping has been applied? Thank you.
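
One lightweight check that works on SQL 2000: compare a row count and a checksum aggregate per table on both servers (table name hypothetical).

SELECT COUNT(*)                         AS RowCnt,
       CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS TableChecksum
FROM dbo.SomeTable
-- run on primary and secondary; differing values mean the data differs
-- (note: text/ntext/image columns are not included by BINARY_CHECKSUM)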


Recovery :: Validate Data In Transaction Logs Shipping

Jul 16, 2015

Setting aside stored procedures, reports, and all that stuff, I want to know a possible way to make sure that the data inside my secondary server's read-only database is the same as the data in my primary server's database.

So what is the simple way to do this check?


Transact SQL :: Finding Gaps And Filled With Last Validate Data?

Aug 26, 2015

Currently I am facing a complex scenario related to gaps and sequences. I have tried different cases but did not get the correct results; I am sure window functions are the right tool. I have a table with information grouped by PublicationId, Provider, MetricId, and Amount by Date, one row for each month, but in some cases the data does not have sequential values. For example, I have the data for the next sequence:

I need to get the sequence for each month; in this case I need to project the months from February to May (with the last previous value, in this case January's), that is:

The data for testing are:

DECLARE @PublicationsByUser AS TABLE
(
  Id   INT,
  PublicationId  INT,
  MetricId       INT,
  ProviderId     INT,
  DateCreated    DATE,
  Amount         FLOAT

[code]....
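
A sketch of one approach, assuming the table variable above is populated and using a placeholder date window: generate a month calendar, cross join it with the distinct series keys, and use OUTER APPLY to carry the last known row forward into each gap.

WITH Months AS (
    SELECT CAST('20150101' AS DATE) AS MonthStart
    UNION ALL
    SELECT DATEADD(MONTH, 1, MonthStart)
    FROM Months
    WHERE MonthStart < '20151201'
), SeriesKeys AS (
    SELECT DISTINCT PublicationId, ProviderId, MetricId
    FROM @PublicationsByUser
)
SELECT k.PublicationId, k.ProviderId, k.MetricId, m.MonthStart, filled.Amount
FROM SeriesKeys k
CROSS JOIN Months m
OUTER APPLY (
    SELECT TOP (1) p.Amount               -- most recent value on or before this month
    FROM @PublicationsByUser p
    WHERE p.PublicationId = k.PublicationId
      AND p.ProviderId   = k.ProviderId
      AND p.MetricId     = k.MetricId
      AND p.DateCreated <= m.MonthStart
    ORDER BY p.DateCreated DESC
) AS filled
ORDER BY k.PublicationId, k.MetricId, m.MonthStart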


Best Practices For SQL Data Access

Nov 1, 2007

Hi,
I am a newbie in the ASP.NET world. I am using a 3-tier application architecture for my web-based application; the database is SQL Server 2000. I have looked at the object and SQL data source objects, but I think they are not suitable for my requirements, so I am planning to use ADO.NET directly to access data from the database (i.e. creating connections, then creating commands and executing them).
Now, what I am looking for is the best known practices for the above task. I have the following solution in mind; please let me know if I am missing some or which would be the best approach:

Create one class which will handle all the database requests, so that all the pages and business objects ask that class to do all the DB-related stuff (creating connections, commands, and execution).


Accessing Database Data In ASP.NET 2.0 - Best Practices?

Dec 31, 2007

I was wondering if you guys know of a good site that talks about programmatically accessing and displaying data from a SQL Server 05 database in ASP.NET 2.0. I want to have a data adapter in a dataset, but I would like to create my own class file and pull the data from the adapter through code into the class. Is this the best way? I'm wondering about the best practices while learning this new technology. Any articles provided would be appreciated. Thanks!


Bulk Inserts To Data Warehouse - Best Practices?

Jul 20, 2005

Hello all,

I just started a new job this week and they complain about the length of time it takes to load data into their data warehouse, which they do once a month.

From what I can gather, they rebuild the indexes before the insert with an 80% fill factor, then insert the data (with the indexes enabled), then rebuild the indexes with a 100% fill factor.

Most of my RDBMS experience is with a different product. We would have disabled the indexes and foreign keys, loaded the data, then re-enabled them, moving any records that violated the constraints into an appropriate audit table to be checked afterwards.

Can someone share with me what the accepted "best practices" are for loading data efficiently into a data warehouse?

Any thoughts would be deeply appreciated.

Steve
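
For what it's worth, the disable/load/re-enable pattern translates to SQL Server (2005 and later) roughly as below. All object names and the file path are hypothetical, and note that only nonclustered indexes should be disabled, since disabling the clustered index takes the table offline.

ALTER INDEX IX_FactSales_Date ON dbo.FactSales DISABLE       -- nonclustered indexes only
ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT ALL             -- suspend FK checks

BULK INSERT dbo.FactSales
FROM 'D:\load\factsales.dat'
WITH (TABLOCK, BATCHSIZE = 100000)

ALTER INDEX IX_FactSales_Date ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100)
ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT ALL    -- revalidate FKs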


Built-in Data Types In SQL Express: Best Practices?

Jul 21, 2006

Greetings,



I think these should be rather simple questions, yet I spent a number of hours last night digging through the forums here and msdn and couldn't find any satisfactory answers. Basically, there tend to be types of information that are commonly saved in most databases, like names, addresses, phone numbers, email addresses, etc...and there are a variety of built in data types in SQL Server. What are the best built in datatypes for some of the common entries in a sql database. Also, there are a number of character based types and I am curious why one would be more useful in certain situations than another. Why is there char( ), nchar( ), varchar( ), nvarchar( ) and text datatypes? Why so many? Also, what is the "text" datatype and when is it most likely to be used? There is very little about the text type that I can find in the msdn or SQL Server docs...aside from the fact that it's text. On top of all this, there's numerous binary types as well. I'm really not getting the reason behind all these different basic types and why I would want to use one over the other in any specific instance.
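
Briefly: the n-prefixed types store Unicode (two bytes per character), the var types are variable-length, and TEXT is a legacy large-object type (superseded by VARCHAR(MAX)/NVARCHAR(MAX) in SQL 2005). A sketch of common choices, with a hypothetical table:

CREATE TABLE dbo.Contact (
    ContactID   INT IDENTITY(1,1) PRIMARY KEY,
    FirstName   NVARCHAR(50)  NOT NULL,   -- Unicode, variable length: names
    LastName    NVARCHAR(50)  NOT NULL,
    Email       VARCHAR(254)  NULL,       -- ASCII-safe addresses, variable length
    Phone       VARCHAR(20)   NULL,       -- text, to keep leading zeros and '+'
    CountryCode CHAR(2)       NULL,       -- fixed length: always exactly 2 chars
    Notes       NVARCHAR(MAX) NULL        -- large text; replaces legacy TEXT
)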



TIA,

Mark




Database Insert Question - Best Practices For Empty Data

Apr 17, 2007

I am making a form that takes input for 1 to 5 students using VWD. With the help of previous posts I have been able to make the database insert query work properly. In my form I have a radio list where the user selects whether they are entering information for 1, 2, 3, 4, or 5 children. Depending on how many children are selected on the radio list, I display the proper number of textboxes and validate the data using the handy RequiredFieldValidator. Now I am at the point where I want to perform the insert to the database depending on the selected number of children in the family. What is the general rule of best practice? Please keep in mind that it is my understanding that ALL fields in a SQL insert statement must have data. Should I...
1) create alternative SQL statements depending on the textboxes displayed, OR
2) is it more common to insert a standard string or integer, depending on the datatype, for the unused textboxes to populate the unused fields?
Sincerely,
Mike
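
For what it's worth, an INSERT does not need a value for every column if the unused columns are declared nullable; a minimal sketch with hypothetical names:

-- Children 2 and 3 absent: insert NULL rather than a dummy value
INSERT INTO dbo.Families (FamilyName, Child1Name, Child2Name, Child3Name)
VALUES (@FamilyName, @Child1Name, NULL, NULL)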


Better Practices Wanted For Cascading Inserts Of Hierarchical Data From Staging Tables

Aug 28, 2007

I apologize if this has been asked, but I can't find a complete answer.

We have a situation with parent/child tables which have an identity column as their PK. We need to be able to insert into the live tables from staging tables. The data in the staging tables are related via a surrogate key.

I have found the OUTPUT clause, but that can only refer to columns of the actual table (since there is no FROM clause in an INSERT). Our current best solution to this problem involves adding bogus "staging" columns to the destination tables, and removing them after we've inserted everything from staging. This is an unattractive solution to say the least.

I'll give an example that mirrors our actual solution, and ask if anyone has a better solution?
----------




Code Snippet
CREATE TABLE [dbo].[TABLE_A](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_A] PRIMARY KEY ([ID] ASC)
)
GO

CREATE TABLE [dbo].[TABLE_B](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [A_ID] [int] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_B] PRIMARY KEY ([ID] ASC)
)
GO

ALTER TABLE [dbo].[TABLE_B]
    ADD CONSTRAINT [FK_TABLE_A_TABLE_B] FOREIGN KEY ([A_ID]) REFERENCES [dbo].[TABLE_A] ([ID])
GO

CREATE TABLE [dbo].[STAGE_TABLE_A](
    [A_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL
)
GO

CREATE TABLE [dbo].[STAGE_TABLE_B](
    [B_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [A_Key] [bigint] NOT NULL
)
GO


The STAGING_COLUMN columns are the ones that will be added before, and dropped after.






Code Snippet
DECLARE @TABLE_A_MAP TABLE (
    A_ID INT,
    A_Key BIGINT
)

INSERT INTO TABLE_A (DATA, STAGING_COLUMN)
OUTPUT INSERTED.ID, INSERTED.STAGING_COLUMN INTO @TABLE_A_MAP
SELECT DATA, A_Key FROM STAGE_TABLE_A

INSERT INTO TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM STAGE_TABLE_B STB
INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key






This seems to work, but I'd really like another alternative. Even though this is happening when nobody else is using the database, I cringe at the thought of adding and removing columns just to make this work.

Here are a few of my constraints:



The above is a simplification of the actual problem. The actual problem goes about five levels deep (hence the B_Key in STAGE_TABLE_B). At the top level, our larger customer will have 100,000 rows to insert. Each level will average 3 times as many rows as the next higher level, so we're talking about real volumes here.

This has to finish over the course of a weekend.

This has to be delivered to QA this Friday
Thanks for any help or insight.
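
One alternative worth sketching, though it needs SQL Server 2008 rather than 2005: unlike INSERT, MERGE allows its OUTPUT clause to reference source columns, which removes the need for the staging columns entirely.

DECLARE @TABLE_A_MAP TABLE (A_ID INT, A_Key BIGINT)

MERGE INTO dbo.TABLE_A AS tgt
USING dbo.STAGE_TABLE_A AS src
   ON 1 = 0                               -- never matches, so every source row inserts
WHEN NOT MATCHED THEN
    INSERT (DATA) VALUES (src.DATA)
OUTPUT INSERTED.ID, src.A_Key INTO @TABLE_A_MAP (A_ID, A_Key);

INSERT INTO dbo.TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM dbo.STAGE_TABLE_B STB
INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key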


Transact SQL :: Select And Parse Json Data From 2 Columns Into Multiple Columns In A Table?

Apr 29, 2015

I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON, so the users requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:

I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I also get 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.

1. First question: How do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?

SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B

If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."

2. My second question: How do I get around this error?

SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B,  fnSplitJson2(A.ITEM6,NULL) C

I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
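
On the second question, the error is most likely the comma: table expressions joined with a comma cannot see each other's columns, so A.ITEM6 cannot be bound. Chaining the APPLYs, as sketched below, should work. (For the first question, pivoting the function's key/value rows into columns, e.g. with MAX(CASE WHEN ...) grouped by the product key, collapses the 8 extra rows into one.)

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C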


A Word About Meta-data, Pass Through Columns And Derived Columns

Oct 13, 2006

Here's another one of my bitchfests about stuff which annoys the *** out of me in SSIS (and no such problems in DTS):

Do you ever wonder how easy it was to set up a text-file-to-DB transform in DTS? I had no problems at all. In SSIS, I spent half a day trying to figure out how to get proper column data types for a text file. Of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager, BUT guess what: it samples ONLY 1000 rows. So I tried to change that number to 50000 and clicked OK, BUT MS changed it back to 1000 without me noticing it. SO NO WONDER some of the datatypes did not match later on. And boy, what fun it is to change the source columns after you have created a few transforms.

This s**t just breaks... So, a word about Derived Columns - pretty useful feature, eh? It's not f***ing useful if it DELETES SOME of the code itself after there have been changes in the data flow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow and messed up derived columns just because the lineage IDs don't match.

Meta-data - it would be useful if you could change it and refresh it. I'm just sick and tired of it showing warnings and errors when there's nothing wrong, so after a change I need to double-click all my transforms so that those red & yellow boxes disappear.

Oh, and why I passionately dislike Derived Columns: so you create new fields based on some data, you do some stuff - combine multiple columns into one - but you have no way of saying "remove the columns from the pipeline". Why would you need that? Well, if you have 50K+ rows with 30+ columns, then it's EXTRA useless memory overhead for your package.

Hopefully one day I will understand how SSIS works (not an easy task, I say) - then I might be able to spend more time on development and less time on my bitchfest. UNTIL then --> Another Day, Another Hassle with SSIS.


Not So Dumb SQL OR Question

Jun 10, 2008

How does OR work when mixed with AND? This I know is elementary, but my loss of SQL intellect is approaching total eclipse.
WHERE (Name LIKE @Kwrd0)
   OR (Description LIKE '%' + @Kwrd0 + ' ' + @Kwrd1 + '%')
   OR (Keywords LIKE '%' + @Kwrd0 + '%')
  AND ((@Kwrd1 IS NULL OR Keywords LIKE '%' + @Kwrd1 + '%')
   AND (@Kwrd2 IS NULL OR Keywords LIKE '%' + @Kwrd2 + '%')
   AND (@Kwrd3 IS NULL OR Keywords LIKE '%' + @Kwrd3 + '%')
   AND (@Kwrd4 IS NULL OR Keywords LIKE '%' + @Kwrd4 + '%')
   AND (Enabled = 1) AND (Display = 1) AND (Rank < 20))
ORDER BY L_Rank
It qualifies on the first two words and ignores the rest of the statement. Can ORs and ANDs be mixed? The potential input is up to five words.
Thank you
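
They can be mixed, but AND binds more tightly than OR, so the five keyword tests here are swallowed into the last OR branch. A sketch of the likely intent, with the OR terms wrapped in one set of parentheses:

WHERE (   Name LIKE @Kwrd0
       OR Description LIKE '%' + @Kwrd0 + ' ' + @Kwrd1 + '%'
       OR Keywords LIKE '%' + @Kwrd0 + '%')
  AND (@Kwrd1 IS NULL OR Keywords LIKE '%' + @Kwrd1 + '%')
  AND (@Kwrd2 IS NULL OR Keywords LIKE '%' + @Kwrd2 + '%')
  AND (@Kwrd3 IS NULL OR Keywords LIKE '%' + @Kwrd3 + '%')
  AND (@Kwrd4 IS NULL OR Keywords LIKE '%' + @Kwrd4 + '%')
  AND Enabled = 1
  AND Display = 1
  AND Rank < 20
ORDER BY L_Rank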


Dumb Question

Jun 10, 2005

Hi all,
I have a question. I have a database which is really big. I went into Enterprise Manager, right-clicked the database, and selected Properties; I saw the size: 7000 MB, and space available: 6700 MB. Does that mean that when the database was initially created, it allocated around 7 GB of storage, and that now, even if I delete some data, it will not shrink the database size but only make the available space bigger? Is there any way to shrink the database size without destroying any data?

Thanks!
Betty
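
Yes: deleting data frees space inside the files but does not shrink them. DBCC SHRINKDATABASE (or SHRINKFILE) releases the space without touching the data; a sketch with hypothetical names:

DBCC SHRINKDATABASE (MyBigDatabase, 10)      -- shrink, leaving 10% free space
-- or shrink a single file to a target size in MB:
DBCC SHRINKFILE (MyBigDatabase_Data, 500)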


Dumb SQL 6.5 Question....

Sep 3, 2004

I have a mixed environment of mostly SQL 2000 server with a few (3?) SQL 6.5 servers (the ? is there because every now and again, I find a new, undocumented server hiding behind a firewall that some developer just happens to be having an issue with).

Can I install SQL 6.5 client tools alongside SQL 2000 client tools on the same server?

Regards,

hmscott


Dumb Question

Sep 29, 2007

Sorry about this, but I've worked primarily in Access for years. Does SQL Server have a boolean or a yes/no data type for its table columns?

Thanks in advance,
Bill
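
The closest equivalent is BIT, which holds 0, 1, or NULL; a minimal sketch:

CREATE TABLE dbo.Example (
    IsActive BIT NOT NULL DEFAULT 0   -- 0 = no/false, 1 = yes/true
)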


A Really DUMB Question...

Jul 20, 2005

I have 4 tables (AA, MAIN, CC, DD).

Everything that is not in AA but is in MAIN, I want put into CC.
Everything that is not in MAIN but is in AA, I want put into DD.
Add everything in DD to MAIN.
Clear AA, CC, and DD.

Thanks in advance
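
A sketch of one way to do it, assuming the tables share a key column (here called KeyCol, a hypothetical name):

INSERT INTO CC
SELECT m.* FROM MAIN m
WHERE NOT EXISTS (SELECT 1 FROM AA a WHERE a.KeyCol = m.KeyCol)

INSERT INTO DD
SELECT a.* FROM AA a
WHERE NOT EXISTS (SELECT 1 FROM MAIN m WHERE m.KeyCol = a.KeyCol)

INSERT INTO MAIN
SELECT * FROM DD

DELETE FROM AA
DELETE FROM CC
DELETE FROM DD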


Dumb Question

Dec 13, 2006

Just bought a new computer, and want to install vb express. There is an icon in my system tray for Microsoft SQL Server Service Manager ver. 8.00.2039. Does this mean that I won't have to install SQL Server Express? I can't find anything in the Add/Remove programs that shows any version of SQL Server that was pre-installed.

thanks

richard


Dumb Question

Feb 14, 2008

Hi everyone, I have a dumb question. I want to import some MS Access tables into SQL Server 2005 Express... there is supposed to be an import wizard in the Binn directory, but it doesn't seem to be there... is this wizard only part of the full-blown SQL Server 2005 package? I've searched through the SQL docs, but can't make heads or tails of it.

Thanks,
Noob


Dumb Question

Jan 15, 2007

Simple question from a simpleton. I'm working through Microsoft Press's SQL Server 2005 Reporting Services Step by Step. It references a database, rs2005sbsDW, which as far as I can tell was not included with either the sample databases on the SQL Server (Standard Edition) install disk or the Step by Step book. Where the heck is it? What am I missing?

Trying to reinstall the sample databases from the SQL Server install disk tells me I have everything installed already.

Thanks in advance.


Dumb Question

Aug 7, 2007

I have a simple flow that loads a data table from some flat files. It works properly, but I can't figure out how to add only the rows that don't already exist (so I won't get an error from the duplicate ID). I added a Lookup that redirects records that don't match any ID, but when I run it I get a timeout error (?). It seems to pick up the right number of records to add, but when it gets to the SQL Server Destination it seems to generate a timeout.



SSIS package "ImportAL3.dtsx" starting.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002785.AL3" has started.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x400490F4 at Data Flow Task, LookupGrade [2832]: component "LookupGrade" (2832) has cached 11 rows.
Information: 0x400490F4 at Data Flow Task, LookupTestID [5608]: component "LookupTestID" (5608) has cached 0 rows.
Error: 0xC0202009 at Data Flow Task, SQL Server Destination [872]: An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Reading from DTS buffer timed out.".
Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002785.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002785.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002786.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002786.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002786.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002787.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002787.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002787.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002788.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002788.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002788.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002789.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002789.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002789.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002790.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002790.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002790.AL3" has ended.
Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "SQL Server Destination" (872)" wrote 6 rows.
Warning: 0x80019002 at Data Flow Task: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Task failed: Data Flow Task
Warning: 0x80019002 at ImportAL3: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "ImportAL3.dtsx" finished: Failure.


Dumb Question SQL Server 7.0

Sep 17, 1999

Hi !

I'm new to SQL-server 7.0

If I install the Desktop Edition on an NT workstation, can I then fully administer 6.5 installations of SQL Server? Right now I have both SQL Server 6.5 and SQL Server 7.0 Enterprise Edition, but can't access my 6.5 databases because the SQL-DMO version is too low.

How does it all work ?

Regards
Joey


Really Easy/dumb SQL Question...

Jun 21, 2004

Pardon me, my SQL knowledge is not advanced enough to know how to do this, though I'm sure it's pretty simple:

Let's say I have a table called products, with fields like SKU, name, price. And let's say I have a temporary table with changes to be made (updates) to the current products.. same fields, SKU, name, price. I basically would like to be able to update currently existing entries in the products table, with the changes shown in the temporary table. Example:

PRODUCTS:
T231,Crazy Stick,4.99
023J87,Basketball Hoop,12.99
GB-572,CD Rack,8.99

TEMP. TABLE:
GB-572,Wooden CD Rack,8.99
T231,Crazy Stick,3.95

So I'd like to just merge the products in temp table w/ the ones in products. How can I do this?

Thanks!
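
A sketch of the classic UPDATE ... FROM join (the temp table name below is hypothetical):

UPDATE p
SET p.name  = t.name,
    p.price = t.price
FROM products p
INNER JOIN temp_products t ON t.SKU = p.SKU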


Dumb Question - Significance Of N

Oct 12, 2004

I know this is a stupid question, but I just can't find the proper explanation. I often see the letter N preceding a parameter when executing a stored procedure (i.e. exec sp_xxx @parm = N'test'). What is the significance of the N, and when should it be used?

Thanks,
Roby2222
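
In short, N marks the literal as Unicode (NVARCHAR) rather than the database code page's VARCHAR; it matters whenever the parameter is NVARCHAR or the text can contain characters outside the code page. For example:

exec sp_xxx @parm = 'test'    -- VARCHAR literal, converted through the code page
exec sp_xxx @parm = N'test'   -- NVARCHAR (Unicode) literal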


Dumb Question - Index.

Jun 6, 2008

I am running a cross-tab query on a table that has 15,000 records in it to check specific records (it is basically running a table-valued function about 10,000 times).

Here's the actual query

Select a.EmployeeID,b.*
from #TmpActiveEmployeesWSeverance a
cross apply
dbo.fn_Severance_AccountItemsTableBULKRUN(a.EmployeeID,a.BenefitTypeID,null,null,null,null,null) b


Within that function there is a sub-query on a table that is


Select col3
from T_Mytable a
where col1 = @EmployeeID
and Col2 = @BenefitTypeID


I cannot figure out why there is not ANY performance increase from having a non-clustered index on T_Mytable(EmployeeID, BenefitTypeID).

Shouldn't SQL reference this index when determining the execution plan? Or is it because the record count is only about 10,000 records, so there is no need for SQL to use the index?

Thanks for the clarification, I just would like to know why this is.
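
One thing worth trying, sketched below: make the index cover the sub-query by including col3, so the roughly 10,000 executions never need a key lookup. (Whether the optimizer can use it at all also depends on the function being an inline table-valued function rather than a multi-statement one.)

CREATE NONCLUSTERED INDEX IX_T_Mytable_Emp_Benefit
ON T_Mytable (col1, col2)     -- EmployeeID, BenefitTypeID
INCLUDE (col3)                -- covers the SELECT list: no key lookup needed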


Very Dumb Beginner Question

Jun 27, 2007

I need to develop a web-based (intranet) database application. Would SQL Server be the best product for the data storage? And would it be available in "real time" for statewide input? (In other words, not needing replication.)



Thanks in advance for your patience and assistance.


Dumb 2000 - 2005 Question

Feb 20, 2006

OK, stupid, but I've got to ask!

will a SQL 2005 backup file restore to a SQL 2000 server?

cheers
Fatherjack


Noob Here, Dumb, But Quick Question

Oct 25, 2004

Ok, I know a LITTLE about SQL 2000. I am just starting to get my feet wet in the area. I know how to install SQL, back up DBs, and restore them - I can even do it over the network now. YAY ME! Anyway, here is my question. I have a production server with 4 RAID arrays: one for my OS, one for the main SQL DBs (heavy transaction DBs), one for my log files, and one that is supposed to be for my tempdb. When I installed SQL it asked me for the default data path, which is where my main SQL DBs go, but I don't remember it asking me where I want the tempdb to go. How do I change the location of tempdb.mdf and tempdb.ldf to the drive arrays I want them on, even though I have already installed SQL?

Thanks. And sorry ahead of time for the noobiness of the question. I did do a search first, but didn't really see anything that would help me.
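
For what it's worth, tempdb can be moved after installation with ALTER DATABASE. The logical names below (tempdev, templog) are the defaults, the paths are placeholders, and the change takes effect after the SQL Server service is restarted.

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLogs\templog.ldf')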


Slightly Dumb Question About JOIN

Jul 20, 2005

Hello All,

I'm trying to find out exactly what JOIN does, e.g.:

SELECT A.Name
FROM Author A JOIN Publisher P
ON A.SomeID = P.SomeID
WHERE P.Country = 'X'

I know what inner, outer, right, and left joins do, but what does just JOIN on its own do? (Can't find it in help either.)

Thanks,
K Finegan
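
For what it's worth, a bare JOIN is simply shorthand for INNER JOIN, so these two queries are equivalent:

SELECT A.Name FROM Author A JOIN Publisher P ON A.SomeID = P.SomeID
SELECT A.Name FROM Author A INNER JOIN Publisher P ON A.SomeID = P.SomeID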
