I have a two-part requirement for determining a row of data in a table through SSIS:
a. Work out the LEG_SEQ_NBR of the VESSEL, VOYAGE and LEG already stored in table1. To do this, join table1 to table2 on the BL_ID column.
b. Work out the 2ndVESSEL, 2ndVOYAGE and 2ndLEG.
Once we have identified the LEG_SEQ_NBR of the VESSEL, VOYAGE and LEG already stored in table1, we need to add 1 to this value and then find that LEG_SEQ_NBR in table2.
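A rough, untested T-SQL sketch of that two-step lookup; every column name here (BL_ID, VESSEL, VOYAGE, LEG, LEG_SEQ_NBR) is an assumption based on the description:

-- Step (a) finds the current leg's LEG_SEQ_NBR via the BL_ID join;
-- step (b) adds 1 and looks that sequence number up again in table2.
SELECT nxt.VESSEL AS SecondVessel,
       nxt.VOYAGE AS SecondVoyage,
       nxt.LEG    AS SecondLeg
FROM table1 AS t1
JOIN table2 AS cur
  ON cur.BL_ID  = t1.BL_ID
 AND cur.VESSEL = t1.VESSEL
 AND cur.VOYAGE = t1.VOYAGE
 AND cur.LEG    = t1.LEG
JOIN table2 AS nxt
  ON nxt.BL_ID       = cur.BL_ID
 AND nxt.LEG_SEQ_NBR = cur.LEG_SEQ_NBR + 1;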
The DDL of table1 and table2 along with the test data are as below:
Hi! I am new to SQL Server... looking for some veteran assistance.
"Data Integrity Report"
I need a stored procedure that takes a table name as a parameter and returns a cursor suitable as a data source for a pre-built Reporting Services report (I guess Reporting Services would call the SP?).
The cursor/report needs to have the following columns:
1. Ordinal_Position (i.e. column number)
2. Column_Name
3. Number_Of_Blank_Rows (how many missing values for this column in this table)
4. Difference (between total row count and the population of this column)
5. Data_Type
6. Column_Length (either Character_Maximum_Length or the numeric widths rolled up with COALESCE?)
7. Sample_Data (the contents of the "first" row in the table, based on a TOP(1) and ORDER BY xxx)

The report should look like this (for a table with 100 rows):
Col Num  Col Name  # Blanks  Difference  Data Type  Col Length  Sample Data
1        Name      12        88          varchar    30          Sally Smith
2        Address   34        66          varchar    45          123 Main St Apt 45
3        Acct_ID   0         100         varchar    4           AB12345
Using the "Information_Schema.Columns" I can get everything I need except for #3 (blanks count) and #7 (Sample data).
Is it possible to do this as one query, with a CTE or APPLY or something, or do I need to build a table variable from INFORMATION_SCHEMA and then use dynamic SQL with a COUNT(*) per column? And the same for the sample data.
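For what it's worth, the dynamic-SQL route might look roughly like this. It is a sketch only: it treats NULL as "blank", it assumes @TableName is trusted and unqualified, and it leaves out Data_Type, Column_Length and Sample Data for brevity:

CREATE PROCEDURE dbo.usp_DataIntegrityReport
    @TableName sysname
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max) = N'';

    -- Build one aggregate SELECT per column, glued together with UNION ALL
    SELECT @sql = @sql
        + N'SELECT ' + CAST(ORDINAL_POSITION AS nvarchar(10)) + N' AS Ordinal_Position, '
        + N'N''' + COLUMN_NAME + N''' AS Column_Name, '
        + N'SUM(CASE WHEN ' + QUOTENAME(COLUMN_NAME) + N' IS NULL THEN 1 ELSE 0 END) AS Blank_Rows, '
        + N'COUNT(*) - SUM(CASE WHEN ' + QUOTENAME(COLUMN_NAME) + N' IS NULL THEN 1 ELSE 0 END) AS Difference '
        + N'FROM ' + QUOTENAME(@TableName) + N' UNION ALL '
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @TableName;

    SET @sql = LEFT(@sql, LEN(@sql) - LEN(N'UNION ALL'));  -- drop the trailing UNION ALL
    EXEC sys.sp_executesql @sql;
END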
Sorry for the long post, and thanks in advance! John
I want to access SQL Server table properties from ASP.NET. How do I know whether a table defined in SQL Server has an autoincrement (identity) field or not? I have to go through all the tables in a database and execute a different function depending on whether the table has autoincrement on or off, and I am not able to identify this property from code. Please help. Thank you.
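A starting point on the SQL side (run it from ADO.NET and branch on the flag):

-- Lists every user table with a flag saying whether it has an identity column
SELECT t.name AS TableName,
       OBJECTPROPERTY(t.object_id, 'TableHasIdentity') AS HasIdentity  -- 1 = yes, 0 = no
FROM sys.tables AS t
ORDER BY t.name;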
Does anybody know of a way to determine the last date/time a table has been accessed (query/update)? I've done enough research to know that this isn't easy. However, perhaps somebody has figured out a way through some of the stats that SQL Server keeps to determine the last access of a table. I have recently been put on a team that had no DBA and has a number of databases out there. They would like to determine which databases are inactive and get rid of them. I am a developer and haven't had much SQL Server Administration experience. Any info will help greatly! Thanks!
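One commonly suggested starting point is the index-usage DMV, with the big caveat that its counters reset whenever the instance restarts, so it only reflects activity since the last restart:

-- Last read/write per table in the current database since the last instance restart
SELECT OBJECT_NAME(ius.object_id) AS TableName,
       MAX(ius.last_user_seek)    AS LastSeek,
       MAX(ius.last_user_scan)    AS LastScan,
       MAX(ius.last_user_lookup)  AS LastLookup,
       MAX(ius.last_user_update)  AS LastUpdate
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.object_id;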
I need to write SQL code that determines whether ID is unique in the CUSTOMER table.
My two tables are:
CREATE TABLE CUSTOMER (
    [CUSTNO] VARCHAR(5) NOT NULL,
    [ID] CHAR(9) NOT NULL,
    [NAME] VARCHAR(128) NOT NULL,
    [ADDRESS] VARCHAR(128) NOT NULL,
    [DATEOFBIRTH] DATE NOT NULL,
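Whatever the rest of the definition looks like, a simple duplicate check answers the uniqueness question; an empty result means ID is unique:

-- Any ID returned here appears more than once, i.e. ID is not unique
SELECT [ID], COUNT(*) AS Occurrences
FROM CUSTOMER
GROUP BY [ID]
HAVING COUNT(*) > 1;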
I need a SQL or T-SQL command (not a stored procedure) that will determine whether a table exists (TBL_PARAMETERS). The command needs to return 1 if the table exists or 0 if it does not.
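Something along these lines should do it (the dbo schema is an assumption):

SELECT CASE WHEN OBJECT_ID(N'dbo.TBL_PARAMETERS', N'U') IS NOT NULL
            THEN 1 ELSE 0
       END AS TableExists;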
I'm creating SSRS reports via the web service Render method and would like to be able to determine when a report has no data.
Currently what I'm doing is rendering the report twice: once as a PDF (the format that the report needs to be in) and once as a CSV. I then check the CSV for a specific string placed in the NoRows property of the report's table.
Is there a better way of doing this? It seems to me that this would have been a good candidate to include in the Warnings array.
Between this issue and the hacks required to get data into the header, I'm thinking that I should maybe reconsider some other reporting options...
I'm new to this whole trigger 'thing', so forgive me if this question has been asked and answered a few times already.
I'm in the process of writing a trigger that will send an e-mail to an application admin when any table within a given database is altered (i.e., a column is added or deleted). I can get the e-mail to fire when that happens without any issue, but I'd like to let the admin know which table was tweaked and what new column was added.
Is this a relatively easy thing to do and I'm just not finding the right built-in variable name, or does something more need to be done?
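The EVENTDATA() function inside a DDL trigger exposes exactly this information. A rough, untested sketch (the mail profile and recipient are placeholders, and this covers ALTER TABLE only):

CREATE TRIGGER trg_NotifyTableAltered
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    DECLARE @evt xml = EVENTDATA();
    DECLARE @tableName sysname =
        @evt.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname');
    DECLARE @command nvarchar(max) =
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    DECLARE @subject nvarchar(255) = N'Table altered: ' + @tableName;

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MailProfile',        -- placeholder
        @recipients   = 'admin@example.com',  -- placeholder
        @subject      = @subject,
        @body         = @command;             -- the full ALTER TABLE statement text
END;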
Our reporting group would like to generate reports based on the day of the week. Is there a function or other means of determining the day from the data in a datetime field? Thanks for any assistance in advance.
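Yes: DATENAME and DATEPART both read the weekday straight out of a datetime (table and column names below are placeholders):

SELECT DATENAME(weekday, OrderDate) AS DayName,   -- e.g. 'Monday'
       DATEPART(weekday, OrderDate) AS DayNumber  -- 1..7, depends on SET DATEFIRST
FROM dbo.Orders;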
Does anyone know how to determine the base datatype of a column? I've tried using sp_columns, but if there is a user-defined datatype on the column it returns that name instead of the base datatype. I've also tried accessing systypes and syscolumns to determine the base datatype.
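One approach that works even when a user-defined type is in play is to join sys.types twice; the table name below is a placeholder:

-- system_type_id points at the base type even when the column uses a UDT
SELECT c.name  AS ColumnName,
       ut.name AS DeclaredType,  -- the UDT name, if any
       bt.name AS BaseType       -- the underlying system type
FROM sys.columns AS c
JOIN sys.types AS ut ON ut.user_type_id = c.user_type_id
JOIN sys.types AS bt ON bt.user_type_id = c.system_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.MyTable');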
I have several tables with a varbinary column in a database. They have names like CSB_BLOB or OBJECT_BLOB, and I am having intermittent success getting the data out.
For example, this query returns readable text from this data:
0x46726F6D3A20226465616E6E6167726.....etc --data as stored in the column
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM OBJECT_BLOB
However, the column in CSB_STATUS_LOG holds data like the following:
0x0001000000FFFFFFFF01000000000000000C....etc. --data as stored in column
--this query returns an empty result
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM CSB_STATUS_LOG
--this query returns no change???
SELECT CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), CSB_BLOB, 2), 2) FROM CSB_STATUS_LOG
--result: 0001000000FFFFFFFF01000000000000000C....etc
Obviously there is a difference between the two, but I am not educated enough to interpret it. What do I need to learn or read so I can look at the data in one of these BLOB columns and know how to convert it to something meaningful?
Something like:
1. Try to cast as varchar to see if it is text.
2. Turn it into a byte array and see if it is a jpg.
3. Turn it into a byte array and see if it is a pdf.
4. Convert it to hex and then cast as varchar.
5. etc.
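The usual trick for steps 2-4 is to inspect the first few bytes (the "magic number") of the blob. For what it's worth, a prefix of 0001000000FFFFFFFF, like the second example above, usually indicates a serialized .NET (BinaryFormatter) object rather than text, which would explain the unreadable cast. A rough sketch:

-- Guess the format from the leading bytes; the signatures are well-known magic numbers
SELECT CSB_BLOB,
       CASE
           WHEN SUBSTRING(CSB_BLOB, 1, 3) = 0xFFD8FF   THEN 'jpeg'
           WHEN SUBSTRING(CSB_BLOB, 1, 4) = 0x25504446 THEN 'pdf (%PDF)'
           WHEN SUBSTRING(CSB_BLOB, 1, 4) = 0x00010000 THEN 'likely a serialized .NET object'
           ELSE 'unknown - try CAST(CSB_BLOB AS VARCHAR(MAX))'
       END AS GuessedFormat
FROM CSB_STATUS_LOG;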
I have a data table from an old system with 10 data fields stored in one row. How can I separate those fields into another table as 10 rows? Which Data Flow component can I use? (I have tried several components.) By the way, the original table is large; it contains 3,000,000 rows.
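Two options that may help: the Data Flow has an Unpivot transformation built for exactly this, or the reshaping can happen in the source query itself so the data flow only ever sees narrow rows. A sketch (KeyCol and Field1..Field10 are placeholder names, and UNPIVOT needs the ten columns to share a data type, so casts may be required):

-- Turns each wide row into ten narrow rows before it reaches the data flow
SELECT KeyCol, FieldName, FieldValue
FROM dbo.OldTable
UNPIVOT (FieldValue FOR FieldName IN
        (Field1, Field2, Field3, Field4, Field5,
         Field6, Field7, Field8, Field9, Field10)) AS u;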
Hi, I have an Excel workbook with data in Sheet1 and Sheet2. I need to transfer the data from these two sheets into a single table using a single package. How can I do this in SSIS?
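The usual pattern is two Excel Source components (one per sheet) feeding a Union All transformation into a single destination. Alternatively, if the ACE provider is installed and ad hoc distributed queries are enabled, both sheets can be read in one source query; the file path below is a placeholder and the exact provider string may need adjusting:

SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\data\input.xlsx', 'SELECT * FROM [Sheet1$]')
UNION ALL
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\data\input.xlsx', 'SELECT * FROM [Sheet2$]');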
I have the following scenario: N identical databases (corresponding to different fiscal years, with names <Company Name>.<YEAR>). We want to consolidate the N DBs into a new data warehouse.
In SSIS we have designed a Data Flow that reads through an OLE DB Source (connected to one of the N databases) and maps to an OLE DB Destination (connected to the new DB).
The question is: how do we loop in SSIS through the N identical connections, so as to repeatedly execute the designed Data Flow, each time with a different connection?
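One common pattern: wrap the Data Flow in a Foreach Loop Container that feeds each database name into a variable (say User::DbName), then put a property expression on the OLE DB connection manager's ConnectionString so every iteration points at the next database. Roughly (server name and provider are placeholders):

"Data Source=MyServer;Initial Catalog=" + @[User::DbName] + ";Provider=SQLNCLI10.1;Integrated Security=SSPI;"

With that expression in place, the same Data Flow executes once per database name in the loop.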
My requirement is: UPDATE Table1 SET No = Table2.ID, based on an exact match of Table1.Name = Table2.Name and Table1.Add = Table2.Add. In other words, get back the ID for source Table1.

2nd Data Flow:
Source (Table1: Name, Add, No)
--> Lookup (Table2: Name, Add; matched lookup columns Name and Add, with a tick mark on ID)
--> (Match) OLE DB Command: UPDATE Table1 SET No = ? WHERE RowID = ? (here Param_0 = No, Param_1 = RowID)

My issue is that if Table1 has duplicates (same Name and Add, but a different RowID), it updates the same ID into Table1.No for all of them, so the IDs are not getting back correctly:
Table 1:
Name  Add         No  RowID
----  ----------  --  -----
aa    #a-1,India  1   10
bb    #a-1,India  2   11
aa    #a-1,India  1   12
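Worth noting: if two Table1 rows have identical Name and Add values, any lookup keyed only on Name and Add must return the same ID for both rows; the match key simply isn't selective enough to tell them apart. A plain set-based update behaves the same way (sketch):

-- Set-based equivalent of the Lookup + OLE DB Command pattern; duplicate
-- (Name, Add) pairs in Table1 still receive the same ID, by definition.
UPDATE t1
SET t1.No = t2.ID
FROM Table1 AS t1
JOIN Table2 AS t2
  ON t2.Name  = t1.Name
 AND t2.[Add] = t1.[Add];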
I'm moving data from one database to another (INSERT INTO ... SELECT ... FROM ....) and am encountering this error:
Msg 8114, Level 16, State 5, Line 6 Error converting data type varchar to numeric.
My problem is that Line 6 is:
set @brn_pk = '0D4BDE66347C440F'
so that is obviously not the problem, and my query has almost 200 columns. I could go through them one by one and compare which column is int in my destination table and which is varchar in my source tables, but that could take quite a while. How can I work out which column is causing the problem?
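If the server is SQL Server 2012 or later, TRY_CONVERT can hunt down the offending values column by column without raising an error (table, column and target precision below are placeholders):

-- Rows whose varchar value will not convert to numeric
SELECT *
FROM dbo.SourceTable
WHERE SomeVarcharColumn IS NOT NULL
  AND TRY_CONVERT(numeric(18, 2), SomeVarcharColumn) IS NULL;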
I'm trying to use Excel in SSIS to import data from a spreadsheet to a staging table. The package runs well from the web server using SSMS, but when I deploy it and try to execute the package, I get the error below. My question: do I have to install the AccessDatabaseEngine driver on the SQL database server or on the web server where I'm executing the SSIS package?
Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode.
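For what it's worth, the provider has to be registered on the machine where the package actually executes, and a 64-bit process cannot load the 32-bit Jet provider at all. One workaround is to run the package with the 32-bit dtexec (both paths below are assumptions; the first is the usual SSIS 2008 location):

rem The 32-bit dtexec can load 32-bit OLE DB providers such as Microsoft.Jet.OLEDB.4.0
"C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /F "C:\packages\ImportExcel.dtsx"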
When assigning permissions for an authenticated user to connect to a server database, if I want the user to be able to insert/update/delete data on DB objects, specifically tables, what permissions should be assigned to that user?
My thought was INSERT/UPDATE/DELETE; however, someone suggested that the EXECUTE permission would do this...
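For direct table DML the permissions really are INSERT, UPDATE and DELETE (plus SELECT to read); EXECUTE applies to stored procedures and functions, not to tables. For example (schema and user names are placeholders):

GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [AppUser];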
Hi, I am importing data into a table using an SSIS package. I created an INSERT trigger on that table, but the trigger is not firing during the import. Please help. Thanks, CP
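A likely cause (an assumption about the package): the OLE DB Destination's fast-load path is a bulk insert, and bulk inserts skip triggers unless told otherwise. Adding FIRE_TRIGGERS to the destination's FastLoadOptions property should make the trigger fire. The same rule applies to plain T-SQL bulk loads:

-- Triggers fire during a bulk load only when FIRE_TRIGGERS is specified
-- (table and file names are placeholders)
BULK INSERT dbo.TargetTable
FROM 'C:\data\input.csv'
WITH (FIRE_TRIGGERS);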
I am looking for a way to leave a Data Flow Task destination table name as-is, and have SSIS auto-create the table if it doesn't exist already.
I searched the forums for this, but based on the question it's difficult to know whether it has been answered or not.
Details:
I am writing some SSIS packages that need to be executable on another server. Many of the Data Flow Tasks copy data (such as from a Fuzzy Grouping transformation, and lots of other stuff) into a new table. But the other server will not have these tables set up for the first run.
My current solution is to check INFORMATION_SCHEMA.TABLES and drop the table if it exists. But then the Data Flow Task will not work (because the table does not exist), so I script out a CREATE TABLE statement to a new window based on the existing table in my dev environment. This is a hack, and I want to find a better method.
It is quite possible (although unlikely) that the source columns could change in the future, or some query used to pull the data might be modified. If this happens, I would need to change the CREATE TABLE Execute SQL Task. I want my package to accommodate such changes without my having to modify it.
When I use the Import/Export Wizard, I can select a table name from the drop down list OR type in a new name. When I type in the new name, it assumes I want to create the table. NOW, is there a way to mimic this in BI Developer Studio? Yep, I saved the Wizard version of the SSIS package and all it does is run a CREATE TABLE statement first.
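One middle ground is an Execute SQL Task ahead of the Data Flow that creates the table only when it is missing; the definition below is a placeholder that must mirror the data flow's output columns:

IF OBJECT_ID(N'dbo.StagingTable', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.StagingTable (
        Id   int          NOT NULL,
        Name nvarchar(50) NULL
    );
END;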
I have an SSIS 2008 parent/child package solution to manage data transfers between two different data sources, so we can copy multiple tables and capture how many rows were transferred and the duration of each transfer. This solution was working fine up until last week, when I made some changes: the package now performs a source count using either standard SQL determined by an expression or SQL provided from configuration tables, and it truncates (or not) the destination table, again controlled by configuration settings in a table. The child packages which perform the data flows have not changed!
The day after the controlling package was promoted to live, I saw some bizarre behaviour: the package log stated all rows were transferred, but the actual table counts did not match what the log stated (see attached file). The package solution works OK on other servers and was OK in DEV, but those environments had fewer tables and rows to transfer. Re-running the package gave the same errors, on some of the same tables and some different ones.
Since it is the child packages doing the transfers and nothing has changed in them, I cannot see how the log can say all rows were transferred when not all of the rows were actually moved.
Attached: the process output (where you can see the counts and the log), the table transfer controller, and an example of the data transfer child packages (as .txt, not .dtsx).

When I set ExecuteOutOfProcess = True, the package worked fine. Unfortunately this is not a good solution, as SSIS 2008 does not tidy up the DtsHost.exe processes it starts and I'd be left with a memory issue after a very short time; we transfer hundreds of tables each day. (I could write a .NET script in the controlling package to kill the child processes, but that would still leave hundreds of processes running before I could end them, as we have three parallel streams to allow a bit better performance.)
I am trying to transfer some tables' data from one database server to another. I created a package in SSIS, and I use a variable to pass each table name. In the Data Flow I use an OLE DB Source, but I cannot set the Data access mode to "Table name or view name variable". Every time, I get the following error info:

"===================================
Error at Data Flow Task [OLE DB Source [31]]: A destination table name has not been provided.
(Microsoft Visual Studio)
===================================
Exception from HRESULT: 0xC0202042 (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------ Program Location:
at Microsoft.SqlServer.Dts.Pipeline.Wrapper.CManagedComponentWrapperClass.ReinitializeMetaData() at Microsoft.DataTransformationServices.DataFlowUI.DataFlowComponentUI.ReinitializeMetadata() at Microsoft.DataTransformationServices.DataFlowUI.DataFlowAdapterUI.connectionPage_SaveConnectionAttributes(Object sender, ConnectionAttributesEventArgs args)".
Can someone tell me what the reason is, or give me some examples?
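A common cause (an assumption about the setup): the table-name variable is empty at design time, so the source has nothing to validate against. Giving the variable a valid default table name, or setting the OLE DB Source's ValidateExternalMetadata property to False (or DelayValidation = True on the task), usually gets past this.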