Below is a simple SELECT statement performing a lookup into a SQL database and returning the columns (associated with the row) into Cells on an eForm. The issue I have is that there are 42 rows (a number that goes up and down) and I do not feel like writing 42 select statements.
select RiskDescriptor, RiskImpactLowDescriptor, RiskImpactMediumDescriptor, RiskImpactHighDescriptor from [Risk Descriptors] where [RiskDescriptor ID] in (1) order by [RiskDescriptor ID]; <<1@Cell104>> <<2@Cell105>> <<3@Cell106>> <<4@Cell107>>
I would like to add a loop, adding 1 to the RiskDescriptor ID and 4 to the Cells. So on the second pass of the loop: RiskDescriptor ID = 2 <<1@Cell108>> <<2@Cell109>> <<3@Cell110>> <<4@Cell111>>
On the third pass of the loop: RiskDescriptor ID = 3 <<1@Cell112>> <<2@Cell113>> <<3@Cell114>> <<4@Cell115>>, and so on.
The Until portion of the loop can be hardcoded (42 in this example), but I would rather use an EOL check or query the DB for the total number of RiskDescriptor IDs. That way, when the DB changes (IDs go up or down), the SQL statement does not need to be modified.
It is a JDBC call from within the eForm.
I would appreciate any help on how to format a loop in a SQL statement.
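Not knowing the eForm's looping syntax, here is a hedged sketch of the SQL half only (assuming SQL Server 2005 or later for ROW_NUMBER): one query returns every descriptor row plus a row number, from which the target cell can be computed as 104 + (RowNum - 1) * 4, and a second query supplies the loop's upper bound so 42 is never hard-coded.

-- Returns all rows with a computed row number; the eForm loop maps
-- row N to Cells 104 + (N - 1) * 4 through 107 + (N - 1) * 4.
SELECT ROW_NUMBER() OVER (ORDER BY [RiskDescriptor ID]) AS RowNum,
       RiskDescriptor,
       RiskImpactLowDescriptor,
       RiskImpactMediumDescriptor,
       RiskImpactHighDescriptor
FROM [Risk Descriptors]
ORDER BY [RiskDescriptor ID];

-- Upper bound for the loop, read from the database instead of hard-coding 42.
SELECT COUNT(*) AS TotalDescriptors
FROM [Risk Descriptors];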
I receive an error when I use the Lookup component in SSIS that reads:
Statement(s) could not be prepared.
I'm using a SQL 2005 DB as the source, which feeds into a lookup table and is used to compare records with a SQL 2000 database. I've successfully created connection managers to both of these databases. When I try to use the results of a SQL query for the lookup to the SQL 2000 database (which is a linked server) and then map the columns, the error pops up and closes the Lookup properties window.
The details to the error read:
Program Location: at Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.Connections.SQLTaskConnectionOleDbClass.PrepareSQLStatement(String sql, Boolean bypassPrepare) at Microsoft.DataTransformationServices.Design.DtsConnectionCommonControl.CheckSqlQuery()
I'm looking to use the results of this comparison to produce some form of report. Ideas would be greatly appreciated!
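One hedged workaround sketch, since four-part linked-server queries often cannot be prepared: either point the Lookup at the SQL 2000 connection manager directly, or wrap the remote query in OPENQUERY so it runs entirely on the linked server and returns a plain rowset. The linked server, table, and column names below are placeholders.

-- Runs the whole statement on the linked SQL 2000 server and returns an
-- ordinary rowset that the Lookup can prepare and cache.
SELECT *
FROM OPENQUERY([Sql2000LinkedServer],
               'SELECT KeyColumn, CompareColumn FROM dbo.ReferenceTable');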
I am using the lookup transform with the following settings:
Reference table: Use results of an SQL query. The query retrieves the surrogate key and four business key columns from a dimension table which contains a few thousand rows.
Columns: business keys in the incoming data are mapped to the business keys in the reference table, and the surrogate key is looked up from the reference table.
Advanced: Enable memory restriction is OFF (and the other items on the Advanced tab are greyed out).
I assumed that this means that the lookup transform would cache all of the rows in the SQL query, and then perform the lookups against this cache. This is the behaviour that I saw when I was running the package in my local environment in the BIDS debugger.
However, a colleague was doing some profiling on our production database server, and noticed that the lookup transform is instead issuing a single SQL query for each row of input. Our production ETL server has many GBs of free RAM available (way more than enough to cache the contents of the lookup table in memory), and given that memory restriction is disabled, I don't understand why the lookup transform is behaving in this fashion. Does anyone have an explanation for this? I'm probably misunderstanding a key concept here.
We did some "at scale" fuzzy lookup tests today and were rather disappointed with the performance. I'm wanting to know your experience so I can set my performance expectations appropriately.
We were doing a fuzzy lookup against a lookup table with 25 million rows. Each row has 11 columns used in the fuzzy lookup, each between 10-100 chars. We set CopyReferenceTable=0 and MatchIndexOptions=GenerateAndPersistNewIndex and WarmCaches=true. It took about 60 minutes to build that index table, during which, dtexec got up to 4.5GB memory usage. (Is there a way to tell what % of the index table got cached in memory? Memory kept rising as each "Finished building X% of fuzzy index" progress event scrolled by all the way up to 100% progress when it peaked at 4.5GB.) The MaxMemoryUsage setting we left blank so it would use as much as possible on this 64-bit box with 16GB of memory (but only about 4GB was available for SSIS).
After it got done building the index table, it started flowing data through the pipeline. We saw the first buffer of ~9,000 rows get passed from the source to the fuzzy lookup transform. Six hours later it had not finished doing the fuzzy lookup on that first buffer!!! Running Profiler showed us it was firing off lots of singleton SQL queries doing lookups as expected. So it was making progress, just very, very slowly.
We had set MinSimilarity=0.45 and Exhaustive=False. Those seemed to be reasonable settings for smaller datasets.
Does that performance seem inline with expectations? Any thoughts to improve performance?
I'm working with an existing package that uses the fuzzy lookup transform. The package is currently working; however, I need to add some columns to the lookup columns from the reference table that is being used.
It seems that I am hitting a memory threshold of some sort, as when I add 3 or 4 columns, the package works, but when I add 5 columns, the fuzzy lookup transform fails pre-execute:
Pre-Execute
Taking a snapshot of the reference table
Taking a snapshot of the reference table
Building Fuzzy Match Index
component "Fuzzy Lookup Existing Member" (8351) failed the pre-execute phase and returned error code 0x8007007A.
These errors occur regardless of what columns I am attempting to add to the lookup list.
I have tried setting the MaxMemoryUsage custom property of the transform to 0, and to explicit values that should be much more than enough to hold the fuzzy match index (the reference table is only about 3,000 rows, and the entire table is stored in less than 2 MB of disk space).
Say I want to look up a value in another dataset, but there is a grouping that requires you to know the values at each level in order to get to the correct detail record. Can you still use the Lookup function with more than one field to compare against? So, for example:
Department
   \___ SalesPerson
        \___ Measure
I want to be able to add a new row at the Measure level, but look up each field from another dataset. In order to do that I will need both the Department AND SalesPerson values to do the lookup, but I don't think the Lookup function will let us do that, will it?
Actually this is in regard to an SCD Type 2 dimension. The scenario is that I am moving a fact table from an old source, and the fact contains a DimensionA description value that I want to replace with the appropriate ID from the dimension table. That dimension table is SCD Type 2, based on StartDate and EndDate, and the fact table doesn't contain a direct date value; instead there is a TimeId in the fact. So, to update the value in the fact table, I have to join the Time dimension table and the other dimension table to replace the fact's description with the proper ID.
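A hedged sketch of that join, with hypothetical table and column names: the fact's TimeId resolves to a calendar date through the Time dimension, and that date picks the SCD Type 2 row that was current when the fact occurred.

-- Resolve the description to the surrogate key that was valid on the fact's date.
UPDATE f
SET    f.DimensionAId = d.DimensionAId
FROM   FactTable  AS f
JOIN   DimTime    AS t ON t.TimeId = f.TimeId
JOIN   DimensionA AS d ON d.DimensionADescription = f.DimensionADescription
                      AND t.CalendarDate >= d.StartDate
                      AND t.CalendarDate <  ISNULL(d.EndDate, '99991231');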
I am doing a lookup that requires mapping 2 columns in the column mapping section. When I do this, I get the error "Row yielded no match during lookup". The SQL that I captured in SQL Profiler does find the record when I run it in Management Studio. I have already tried trimming everything, to no avail.
Why is this happening?
I tried enabling memory restrictions, but then my package hangs and I get a SQLDUMPER_ERRORLOG.log file with the following logged:
I have a Conditional Split with 3 outputs. On the first output I have a lookup. When I execute the package I have 56 rows going through the Conditional Split; all rows then go to the 2nd and 3rd outputs, but the lookup on the first output generates the error "Row yielded no match during lookup".
I don't understand why the lookup is generating an error while there is no row going through it.
I am designing an SSIS package intended to mine text data (data extracted from websites). Term Lookup/Term Extraction are being used as the mining tools. I have lookup terms defined for the reference table, but the main problem lies in extracting the nearby text/numbers/characters around these lookup terms during mining. For example: I found the noun "Email" 200 times (frequency score) in my text, and now I want to extract the nearby email address (this is also true for the PhoneNumber and Address attributes). So how can I achieve this with SSIS? If you have any ideas or suggestions on how to carry out this challenge, with or without Term Extraction/Term Lookup, please write them here.
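Term Lookup only counts the term itself, so one hedged fallback sketch (table and column names are hypothetical) is to pull a window of characters after each matched term with plain T-SQL, and parse the e-mail address or number out of that window afterwards.

-- For each document containing the term, return the characters that follow it.
DECLARE @term nvarchar(50), @window int
SET @term = N'Email'
SET @window = 60   -- how many characters of context to keep after the term

SELECT d.DocId,
       SUBSTRING(d.DocText,
                 CHARINDEX(@term, d.DocText) + LEN(@term),
                 @window) AS NearbyText
FROM dbo.Documents AS d          -- hypothetical table holding the scraped text
WHERE CHARINDEX(@term, d.DocText) > 0;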
I have an existing table called OrderHeader with a column called ID. The ID column is currently set as an INT. I want to run a script to modify that column to be an IDENTITY. I've tried the following:
ALTER TABLE OrderHeader ALTER COLUMN [ID] INT IDENTITY
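SQL Server cannot add the IDENTITY property to an existing column with ALTER COLUMN, so that statement fails. A hedged sketch of one common workaround, assuming ID is not yet referenced by foreign keys or indexes (those would need rebuilding too): add a new IDENTITY column, drop the old one, and rename.

ALTER TABLE OrderHeader ADD ID_new INT IDENTITY(1, 1) NOT NULL;  -- new identity column
ALTER TABLE OrderHeader DROP COLUMN [ID];                        -- drop the plain INT column
EXEC sp_rename 'OrderHeader.ID_new', 'ID', 'COLUMN';             -- take over the old name

Note that this renumbers the existing values; if they must be preserved, the usual alternative is to copy the data into a new table created with the IDENTITY column using SET IDENTITY_INSERT ON.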
I am trying to modify data in a table using the stored procedure below. It doesn't work properly; I seem to only be getting one character. I think it has something to do with my using "nchar" for the variables. The value I am changing is actually a string.
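The stored procedure itself is not shown, but one hedged guess fits the symptom exactly: a parameter or variable declared as nchar (or nvarchar) with no length defaults to a length of 1, so only the first character is kept. A sketch with hypothetical names:

-- Giving string parameters an explicit length avoids the one-character default.
CREATE PROCEDURE dbo.UpdateDescription
    @Id          int,
    @Description nvarchar(200)   -- was "nchar" with no length, which truncates to 1 character
AS
BEGIN
    UPDATE dbo.SomeTable         -- hypothetical table
    SET    Description = @Description
    WHERE  Id = @Id;
END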
I have a DTS package, package1, which imports an Excel sheet into my database, and it is saved, scheduled, and running fine. The Excel sheet is placed daily at c:\bruce\exceldata, but now we want to change the location of the file to c:\bruce1\new\exceldata.
I have a scheduled DTS package which gets the data from a text file and imports it into a SQL table. The table is dropped and recreated whenever the DTS package runs. Previously the table had 6 rows, and now I want to add a column row_id that has an identity seed. How can I modify my package so that I can get my requirements done?
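A hedged sketch of the change (table and column names are placeholders): the step that recreates the table gets the extra row_id IDENTITY column, while the text file's columns stay mapped in the transformation; row_id then fills itself on every load.

-- Recreated on every run by the package's drop/create step.
CREATE TABLE dbo.ImportTable
(
    row_id INT IDENTITY(1, 1) NOT NULL,  -- new column, populated automatically
    Col1   varchar(100) NULL,            -- existing columns from the text file
    Col2   varchar(100) NULL
    -- ...remaining columns from the text file
);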
I have a query that I am working on that involves 2 tables. The query below is working correctly and bringing back the desired results, except I want to add 1 more column of data, and I'm not exactly sure how to write it.
What I want to add is the following data.
For each row that is brought back, we want to have the COUNT(*) of users who joined the website (tbluserdetails) where their tbluserdetails.date is > the tblreferemails.referDate.
Effectively we are attempting to track how well the "tell a friend" via email feature works and how well it converts into new joined members.
Any assistance is much appreciated!!
thanks once again mike123
SELECT CONVERT(varchar(10), referDate, 112) AS referDate,
       SUM(CASE WHEN emailSendCount = 0 THEN 1 ELSE 0 END) AS '0',
       SUM(CASE WHEN emailSendCount = 1 THEN 1 ELSE 0 END) AS '1',
       SUM(CASE WHEN emailSendCount = 2 THEN 1 ELSE 0 END) AS '2',
       SUM(CASE WHEN emailSendCount = 3 THEN 1 ELSE 0 END) AS '3',
       SUM(CASE WHEN emailSendCount > 3 THEN 1 ELSE 0 END) AS '> 3',
       SUM(CASE WHEN emailSendCount > 0 THEN 1 ELSE 0 END) AS 'totalSent',
       COUNT(*) AS totalRefers,
       COUNT(DISTINCT referUserID) AS totalUsers,
       SUM(CASE WHEN emailAddress LIKE '%hotmail%' THEN 1 ELSE 0 END) AS 'hotmail',
       SUM(CASE WHEN emailAddress LIKE '%hotmail.co.uk%' THEN 1 ELSE 0 END) AS 'hotmail.co.uk',
       SUM(CASE WHEN emailAddress LIKE '%yahoo.ca%' THEN 1 ELSE 0 END) AS 'yahoo.ca',
       SUM(CASE WHEN emailAddress LIKE '%yahoo.co.uk%' THEN 1 ELSE 0 END) AS 'yahoo.co.uk',
       SUM(CASE WHEN emailAddress LIKE '%gmail%' THEN 1 ELSE 0 END) AS 'gmail',
       SUM(CASE WHEN emailAddress LIKE '%aol%' THEN 1 ELSE 0 END) AS 'aol',
       SUM(CASE WHEN emailAddress LIKE '%yahoo%' THEN 1 ELSE 0 END) AS 'yahoo',
       SUM(CASE WHEN referalTypeID = 1 THEN 1 ELSE 0 END) AS 'manual',
       SUM(CASE WHEN referalTypeID = 2 THEN 1 ELSE 0 END) AS 'auto'
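The query's FROM and GROUP BY clauses are not shown in the post, so the sketch below only illustrates the extra column; the table name tblreferemails and the e-mail address link between the two tables are assumptions. Each referral row is first flagged with the number of users who joined after its referDate, and that flag is then summed alongside the existing aggregates.

SELECT CONVERT(varchar(10), r.referDate, 112) AS referDate,
       COUNT(*)           AS totalRefers,
       SUM(r.joinedAfter) AS joinedAfterRefer   -- the requested new column
FROM (SELECT re.referDate,
             (SELECT COUNT(*)
              FROM tbluserdetails u
              WHERE u.emailAddress = re.emailAddress   -- assumed link between the tables
                AND u.[date] > re.referDate) AS joinedAfter
      FROM tblreferemails re) AS r
GROUP BY CONVERT(varchar(10), r.referDate, 112)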
INSERT @Request
SELECT '324234', 'Jack', 'SA023', 12, 111, NULL UNION ALL
SELECT '223452', 'Tom', 'SA023', 12, 112, NULL UNION ALL
SELECT '456456', 'Bobby', 'SA024', 12, 114, NULL UNION ALL
SELECT '22322362', 'Guck', 'SA024', 44, 123, NULL UNION ALL
SELECT '22654392', 'Luck', 'SA023', 12, 134, NULL UNION ALL
SELECT '225652', 'Jim', 'SA055', 67, 143, NULL UNION ALL
SELECT '126756', 'Jasm', 'SA055', 67, 145, NULL UNION ALL
SELECT '786234', 'Chuck', 'SA055', 67, 154, NULL UNION ALL
SELECT '66234', 'Mutuk', 'SA059', 72, 185, NULL UNION ALL
SELECT '2232362', 'Buck', 'SA055', 67, 195, NULL

INSERT @Call
SELECT 111, 1, 12123 UNION ALL
SELECT 112, 1, 12123 UNION ALL
SELECT 114, 2, 12123 UNION ALL
SELECT 123, 2, 12123 UNION ALL
SELECT 134, 3, 12123 UNION ALL
SELECT 143, 1, 6532 UNION ALL
SELECT 145, 1, 6532 UNION ALL
SELECT 154, 2, 6532 UNION ALL
SELECT 185, 2, 6532 UNION ALL
SELECT 195, 3, 6532

INSERT @CallDetail
SELECT 12123, 1, '11/5/2007 10:41:34 AM' UNION ALL
SELECT 6532, 1, '11/5/2007 12:12:34 PM'

-- select * from @Request

Query written to achieve the requirement:

UPDATE r
SET r.UniqueNo = p.RecID
FROM @Request AS r
INNER JOIN (
    SELECT r.RequestID,
           ROW_NUMBER() OVER (PARTITION BY cd.EmpID, r.StateNo, r.CityNo, c.CallDetailID, c.CallType
                              ORDER BY cd.EntryDt) AS RecID
    FROM @Request AS r
    INNER JOIN @Call AS c ON c.CallID = r.CallID
    INNER JOIN @CallDetail AS cd ON cd.CallDetailID = c.CallDetailID
) AS p ON p.RequestID = r.RequestID
WHERE r.UniqueNo IS NULL
I need help with modifying this procedure to join JobTypeGallery and Remodel on JobTypeGallery.TypeID = Remodel.TypeID. I would like the procedure to not allow deleting a record from JobTypeGallery if there are any records in the Remodel table associated with it. Can someone please help me modify this stored procedure?

CREATE PROCEDURE [dbo].[spDeleteJobTypeGallery]
    @typeid int
AS
    DELETE FROM jobTypeGallery WHERE typeID = @typeid
GO
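A hedged sketch of the requested change: check the Remodel table first and bail out when any row references the gallery record; the error message text and severity are illustrative.

ALTER PROCEDURE [dbo].[spDeleteJobTypeGallery]
    @typeid int
AS
BEGIN
    -- Refuse the delete when Remodel still references this job type.
    IF EXISTS (SELECT 1 FROM Remodel WHERE TypeID = @typeid)
    BEGIN
        RAISERROR('Cannot delete: Remodel records are associated with this job type.', 16, 1)
        RETURN
    END

    DELETE FROM jobTypeGallery WHERE typeID = @typeid
END
GO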
I probably know that I don't want to do this, but I have an odd case.
I have a database with a bunch of stored procedures and functions. 128 of them reference a different database than the one they're in. For testing purposes, I need to temporarily re-point these functions and procedures to a different database.
I suspect the answer will be a resounding 'No', but I was wondering if it was possible to somehow run a query against syscomments where I could update all of those objects with the temporary database name rather than edit each and every one of them. Conceptually, it would just be an UPDATE using REPLACE. Pretty comfortable with that, but I'm always very reluctant to mess with stuff like that.
I'll probably still just do it through QA, but I was wondering if it's possible to do something like that and thought it might be an interesting topic for discussion.
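For what it's worth, a hedged sketch of a safer variant of the same idea (database names are placeholders): rather than updating syscomments in place, which is unsupported and does not recompile anything, pull each affected definition with the name already swapped, review it, and re-run it as DROP/CREATE or ALTER scripts. Definitions longer than 4,000 characters span several syscomments rows (ordered by colid) and need stitching back together.

-- List every proc/function that references the other database, with the
-- definition rewritten to point at the test copy.
SELECT o.name,
       c.colid,
       REPLACE(c.text, 'OtherProdDb.dbo.', 'OtherTestDb.dbo.') AS NewDefinition
FROM syscomments AS c
JOIN sysobjects  AS o ON o.id = c.id
WHERE o.xtype IN ('P', 'FN', 'IF', 'TF')
  AND c.text LIKE '%OtherProdDb.dbo.%'
ORDER BY o.name, c.colid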
I want to drop a table from a publication so that I can copy some data from another server. After the copy, I want to add the article (the table) back to the publication without making any changes to the subscribers' configuration.
Is it really possible on SQL 7.0? Right now it does not allow me to copy the data into the published table, or even drop the article (table); it says that the article is published for replication and cannot be dropped or modified. The table is configured for transactional replication.
Otherwise I will have to drop the entire publication and create all the subscribers again.
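A hedged sketch of the sequence that usually avoids rebuilding the whole publication (publication, article, and subscriber names are placeholders; whether this is allowed depends on the publication's settings): drop the subscriptions on that one article, drop the article, copy the data, then add the article back and re-create just its subscriptions.

-- Drop the subscriptions on the single article, then the article itself.
EXEC sp_dropsubscription @publication = 'MyPub', @article = 'MyTable', @subscriber = 'all'
EXEC sp_droparticle      @publication = 'MyPub', @article = 'MyTable'

-- ...copy the data from the other server into MyTable here...

-- Add the article back and re-subscribe the existing subscriber to it.
EXEC sp_addarticle       @publication = 'MyPub', @article = 'MyTable', @source_object = 'MyTable'
EXEC sp_addsubscription  @publication = 'MyPub', @article = 'MyTable',
                         @subscriber = 'MySubscriber', @destination_db = 'MySubDb'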
I've learned to create DTS packages, saving them in a SQL database. But how can one modify a saved DTS package? And how does one delete a saved DTS package?
I have a trigger that emails me each time an order is modified by a manager. I can't recall how to find out which user is performing the modification. Is there a global variable such as @@username?
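There is no @@username, but a couple of built-in functions give you this; a minimal sketch of what could be embedded in the trigger's e-mail text:

-- SUSER_SNAME() (or the ANSI SYSTEM_USER) returns the login making the change;
-- USER_NAME() returns the database user name.
SELECT SUSER_SNAME() AS LoginName,
       USER_NAME()   AS DatabaseUserName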
I have some rather large TSQL scripts I'd like to schedule as jobs. Unfortunately, the SysJobSteps table in the MSDB Database is limited to 3200 characters while I need it to be 5000. Does anyone know if it's possible to increase the size of this to allow for larger TSQL scripts?
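Widening the msdb system table isn't a supported route, so here is a hedged workaround sketch instead (server name, job name, and file path are placeholders): keep the long script in a .sql file and have the job step run it through osql via a CmdExec step, so the stored command stays short.

EXEC msdb.dbo.sp_add_jobstep
     @job_name  = 'MyLongScriptJob',
     @step_name = 'Run script from file',
     @subsystem = 'CMDEXEC',
     @command   = 'osql -E -S MyServer -i "C:\Scripts\LongScript.sql"'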
I would like to update some values in my table that have this value: XXXX-XXXX. I would like to update the '-' and change it to a '_'. Does anyone know the best way to do this?
I tried this, but I get a "Subquery returned more than 1 value" error:
UPDATE PS_MTCH_RULES SET MATCH_RULE_ID = (SELECT SUBSTRING (MATCH_RULE_ID,1,4)+'_'+SUBSTRING(MATCH_RULE_ID,6, 4) FROM PS_MTCH_RULES WHERE MATCH_RULES_ID <> 'ERS')
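A hedged sketch of the usual fix: update each row from its own value instead of assigning from a subquery that returns every row. REPLACE keeps it simple, and the WHERE clause mirrors the exclusion in the attempt above (the original used the column name MATCH_RULES_ID there, which looks like a typo for MATCH_RULE_ID).

UPDATE PS_MTCH_RULES
SET    MATCH_RULE_ID = REPLACE(MATCH_RULE_ID, '-', '_')
WHERE  MATCH_RULE_ID <> 'ERS'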
I have a field called "assescode9" in a table called "Assessment" which has a length of 6, and I want to change this length to 7 or 8 in SQL. I tried right-clicking on the field and choosing Modify, but when I try to save it says the changes cannot be saved because they require the table to be dropped and recreated, or I do not have the option enabled that allows recreating the table. How do I do this?
I have never dropped a table and recreated it before.
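A length increase on a character column does not actually need a drop-and-recreate; a plain ALTER COLUMN in a query window does it. A hedged sketch, assuming the column is varchar; keep NOT NULL instead of NULL if the existing column disallows nulls. (The designer warning comes from the SSMS option "Prevent saving changes that require table re-creation" under Tools > Options > Designers, which can also be turned off.)

ALTER TABLE Assessment ALTER COLUMN assescode9 varchar(8) NULL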
I am new to sql server 2000, and need a little help.
I have a table called CMRC_Products with various columns. There is one column called ProductImage that holds the name of every image in my catalog (over 4,000 of them). I want to append .jpg to each row in that particular column without losing what is already in there.
I have tried:
UPDATE CMRC_Products SET ProductImage = ProductImage&' .jpg'
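A hedged sketch of the corrected statement: T-SQL concatenates strings with + (the & operator is Access/VB syntax), and the stray space before .jpg is dropped. The WHERE clause is an extra guard against appending the extension twice if the update is re-run.

UPDATE CMRC_Products
SET    ProductImage = ProductImage + '.jpg'
WHERE  ProductImage NOT LIKE '%.jpg'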