Lookup Failures - Prompt For New Entries Through UI Control?
Dec 7, 2007
Hi All,
Sorry if this is an obvious question, I am new to SQL Server.
For the failures of a data flow lookup transformation, I would like to add a prompt through some kind of UI control (e.g. a datagrid) for new lookup entries. I would then take the new entries, redo the lookup, and continue on with my process.
Or, is there a better way to include adding new lookup entries to my process?
Does anyone know how to reference the ongoing dataset within the data flow in SQL?
So here is the scenario I have.
1) OLE DB Source.
2) Look up a value; that column gets added to the dataset.
3) Look up another value, but this time I would rather code the SQL than select a reference table. How do I reference the current dataset?
I am not sure if I can do this with a lookup, but what I would like to do is perhaps use a lookup to retrieve a control date from an unassociated table, to control what date is entered in another table. For example: the main table, table 1, has many entries with a field called date_enter, which is the date the record was entered. Table 2 has a control_date. If the date entered in table 1 is less than or equal to the control date, we want to give the user an error message.
I am thinking of using a display-only field and a lookup to set it. I would need to have the date value from the control table available to the active table of table 1 when entering the date_enter. However, as there is no join field between the two tables, I am not sure how to do it. I was thinking I might have to add a key field that was always null, and in the BEFORE EDITADD EDITUPDATE section set it so that the key would be null. I am using Informix 5. Any help would be appreciated.
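To make the check concrete, something like this is what I'm after (a sketch only; it assumes table 2 holds a single control row, so the unconstrained join acts as a one-row lookup):
------ SQL -------
-- rows in table 1 that violate the rule, i.e. should trigger the error message
SELECT t1.date_enter, t2.control_date
FROM table1 t1, table2 t2
WHERE t1.date_enter <= t2.control_date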
If I wanted to search for jobs at a particular status (e.g. 0130) and wanted to keep the jobs at this status until they have reached 0500, 0125, or 0900 in a subsequent status log entry, how can I write the SQL to achieve this?
I have the following SQL, which searches for the jobs at 0130, but I don't know how to develop it further to search on the requirement above.
------ SQL -------
SELECT job.job_number,
       (SELECT MAX(jsl.job_log_number)
        FROM job_status_log jsl
        WHERE job.job_number = jsl.job_number
          AND jsl.status_code = '0130') AS Last_Early_Warning_Status_Entry
[code].....
In the job_status_log table above, there is a job_log_number field which increments by 1 when there is a new status log entry.
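For what it's worth, this is the shape of the query I've been experimenting with (just a sketch; it assumes job_log_number alone orders the log entries):
------ SQL -------
-- jobs whose latest '0130' entry has not yet been followed
-- by a '0500', '0125' or '0900' entry
SELECT j.job_number
FROM job j
WHERE EXISTS (SELECT 1 FROM job_status_log s
              WHERE s.job_number = j.job_number
                AND s.status_code = '0130')
  AND NOT EXISTS (
        SELECT 1 FROM job_status_log s2
        WHERE s2.job_number = j.job_number
          AND s2.status_code IN ('0500', '0125', '0900')
          AND s2.job_log_number > (SELECT MAX(s3.job_log_number)
                                   FROM job_status_log s3
                                   WHERE s3.job_number = j.job_number
                                     AND s3.status_code = '0130'))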
We did some "at scale" fuzzy lookup tests today and were rather disappointed with the performance. I'm wanting to know your experience so I can set my performance expectations appropriately.
We were doing a fuzzy lookup against a lookup table with 25 million rows. Each row has 11 columns used in the fuzzy lookup, each between 10-100 chars. We set CopyReferenceTable=0 and MatchIndexOptions=GenerateAndPersistNewIndex and WarmCaches=true. It took about 60 minutes to build that index table, during which, dtexec got up to 4.5GB memory usage. (Is there a way to tell what % of the index table got cached in memory? Memory kept rising as each "Finished building X% of fuzzy index" progress event scrolled by all the way up to 100% progress when it peaked at 4.5GB.) The MaxMemoryUsage setting we left blank so it would use as much as possible on this 64-bit box with 16GB of memory (but only about 4GB was available for SSIS).
After it got done building the index table, it started flowing data through the pipeline. We saw the first buffer of ~9,000 rows get passed from the source to the fuzzy lookup transform. Six hours later it had not finished doing the fuzzy lookup on that first buffer!!! Running Profiler showed us it was firing off lots of singleton SQL queries doing lookups as expected. So it was making progress, just very, very slowly.
We had set MinSimilarity=0.45 and Exhaustive=False. Those seemed to be reasonable settings for smaller datasets.
Does that performance seem inline with expectations? Any thoughts to improve performance?
I'm working with an existing package that uses the fuzzy lookup transform. The package is currently working; however, I need to add some columns to the lookup columns from the reference table that is being used.
It seems that I am hitting a memory threshold of some sort, as when I add 3 or 4 columns, the package works, but when I add 5 columns, the fuzzy lookup transform fails pre-execute:
Pre-Execute
Taking a snapshot of the reference table
Taking a snapshot of the reference table
Building Fuzzy Match Index
component "Fuzzy Lookup Existing Member" (8351) failed the pre-execute phase and returned error code 0x8007007A.
These errors occur regardless of what columns I am attempting to add to the lookup list.
I have tried setting the MaxMemoryUsage custom property of the transform to 0, and to explicit values that should be much more than enough to hold the fuzzy match index (the reference table is only about 3000 rows, and the entire table is stored in less than 2MB of disk space).
I have a number of DTS packages which when run manually complete successfully however, when run as scheduled tasks they always fail. Can anyone offer any advice?
Where I work, we use a lot of triggers on our tables. And most of the time we use them to send email out (using xp_sendmail). For example, a user enters data and there is an insert trigger on that table to send email out to the appropriate individuals. This is all well and good until the SQL Mail Agent stops running for one reason or another. People try to enter their data and because there is an insert trigger on the table which tries to send mail out the entire transaction fails and the data can't be saved simply because the trigger can't successfully execute xp_sendmail. Does anyone know of a way around this? A better way to accomplish this? Any suggestions are appreciated!
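One pattern I've been considering (sketch only; the queue table and the orders trigger below are made up to illustrate) is to have the trigger write a row to a queue table instead of calling xp_sendmail directly, and let a scheduled job drain the queue:
------ SQL -------
-- the trigger only queues the notification, so an outage of the
-- SQL Mail Agent cannot roll back the user's insert
CREATE TABLE mail_queue (
    id int IDENTITY(1,1) PRIMARY KEY,
    recipients varchar(255),
    subject varchar(255),
    body varchar(2000),
    queued_at datetime DEFAULT GETDATE()
)
GO
CREATE TRIGGER trg_orders_insert ON orders FOR INSERT AS
INSERT INTO mail_queue (recipients, subject, body)
SELECT 'someone@example.com', 'New order entered',
       'Order ' + CAST(i.order_id AS varchar(10)) + ' was entered.'
FROM inserted i
GO
-- a separate scheduled job then reads mail_queue, calls xp_sendmail
-- for each row, and deletes the rows it has successfully sent
That way an outage of the SQL Mail Agent delays the email but can no longer make the user's transaction fail.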
Hi. I am running SQL2000 Standard Edition on a Windows 2K server. I have this annoying problem that the jobs I create through the maintenance plan wizard fail consistently on a certain database that has one table in it. Just the database integrity check and update statistics parts of the maintenance plan fail; the backups are fine. This will occur on a brand new database, with the default options selected and with only one empty table in it. The script to create the table and indexes is below. The errors I get from the maintenance plan are:
--Check Data and Index Linkage [Microsoft][ODBC SQL Server Driver][SQL Server]DBCC failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER, ARITHABORT'.
--Update QP Statistics [Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 1934: [Microsoft][ODBC SQL Server Driver][SQL Server]UPDATE STATISTICS failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER, ARITHABORT'.
I have messed around with setting these db options on and off with no effect on the success of the maintenance plan. I am wondering if anyone can replicate the problem on their own installation, or has any thoughts on how to fix it. Oh, if I take the formula off the table, the jobs run successfully. Thanks in advance.
Script:
/****** Object: Table [dbo].[UserLog] Script Date: 4/10/2001 2:39:48 PM ******/
if exists (select * from dbo.sysobjects
           where id = object_id(N'[dbo].[UserLog]')
             and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[UserLog]
GO
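One thing I still plan to try is running the manual equivalents with the session-level SET options the error complains about turned on first, e.g. (the database name is just an example):
------ SQL -------
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
DBCC CHECKDB ('TestDB')             -- same check the plan's integrity step runs
UPDATE STATISTICS [dbo].[UserLog]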
I have a DTS Job that is reporting failures, but it looks to me as if the job is actually completing successfully. The job only has a couple of steps. Step 1 (DTSStep_DTSExecuteSQLTask_1), an Execute SQL Task, runs a stored procedure to export blobs (pdf files) out of SQL Server and onto the local machine. Here is the code in the stored procedure, called sp_PDFExport:
------ SQL -------
CREATE PROCEDURE [dbo].[sp_PDFExport] AS
begin
set quoted_identifier off
declare @pk int
declare @where_clause varchar(100)
declare @file_name varchar(50)
declare @debug varchar(50)
declare @cmd varchar(50)

--debug
/* if @Debug = 1
   print @cmd
   exec Master..xp_cmdShell @cmd */

-- begin cursor
DECLARE LOOKUP CURSOR FOR
select pr.[id]
from plan_report pr, plan_version pv
where pv.plan_id = pr.plan_id
  and pv.status = '30'
  and pr.create_time
Say I want to look up a value in another dataset, but there is a grouping that requires you to know the values at each level in order to get to the correct detail record. Can you still use the Lookup function with more than one field to compare against? So, for example:
Department
 \___ SalesPerson
       \___ Measure
I want to be able to add a new row at the Measure level, but look up each field from another dataset. In order to do that I will need both the Department AND SalesPerson values to do the lookup, but I don't think the Lookup function will let us do that, will it?
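The only workaround I can think of is concatenating the key fields on both sides of the Lookup, along these lines (expression sketch; "OtherDataset" and the field names are just placeholders):
=Lookup(Fields!Department.Value & "|" & Fields!SalesPerson.Value,
        Fields!Department.Value & "|" & Fields!SalesPerson.Value,
        Fields!Measure.Value, "OtherDataset")
The delimiter just needs to be a character that never occurs in the key data.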
We have a job created by the maintenance job wizard that backs up the transaction logs for all of our databases on an hourly basis. At random intervals, one of the transaction log backups will fail with the following message in the job history: sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
The next scheduled transaction log backup will run fine the next hour. The sqlmaint.exe is present and executable. There are no additional messages in the SQL Server error log or SQL Agent error log. Any ideas what causes this random failure?
I could really use some assistance. I have been researching this problem for over a month now and I have not made any headway or progress.
I am running SQL Server 2000 on Windows 2000 Server. Hardware is Dual Xeon 2.4/400 Procs, 2GB ram and 1 Raid10 Array with 4x 36 GB 10K RPM drives.
The server has about 50 dbs on it. All are primarily used in conjunction with some web application or site. On average the server sees about 270ish connections/sessions.
About 1 - 2 months ago, we started seeing random login failures. We have no explanation for these failures. Our ColdFusion code gives us detailed logging information regarding the exact statement that was being executed when the login failed. When we try to reproduce the failed login, we cannot. There are no misspellings or code inconsistencies in this regard, because the logins are set in the data source, which we have verified.
We are using per-processor licensing, so unless there is a hidden limit we are hitting, or MS is lying about per-processor licensing having unlimited connections, that is not the issue. Also, I've ruled out some kind of network issue, because if that were the case the login would have timed out, as opposed to failing. I've been running a trace and viewing the failed logins.
/* This SP has 2 functions.
   a) if @method='duration' gives the average run duration in minutes for successful jobs
   b) if @method='failures' displays failures/cancels/still-executing jobs
   It defaults to today's date. Specify @xdate for a different date.
   -- Louis Nguyen */

CREATE PROCEDURE UtilityJobsHistory (
     @method varchar(100) = 'duration'
    ,@xdate datetime = null
) AS
set nocount on
set transaction isolation level read uncommitted

if @method = 'duration'
begin

    select @xdate = isnull(@xdate, getdate())

    /* run_duration is in HHMMSS format; drop SS */
    /* run_status: 1 complete, 2 retry */
    /* step_id: 0 is final job outcome */
    /* run_date: yyyymmdd format */

    /* today's performance */
    select a.name,
           minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
    into #today
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    where run_status in ('1', '2')
      and step_id = 0
      and run_date = convert(varchar, @xdate, 112)
    group by a.name

    /* 7-day average performance */
    /* populate #D with dates in yyyymmdd format */
    create table #D (run_date varchar(50))
    declare @idate datetime
    set @idate = @xdate
    while @idate > dateadd(day, -7, @xdate)
    begin
        insert into #D select run_date = convert(varchar, @idate, 112)
        select @idate = dateadd(day, -1, @idate)
    end

    /* Avg7Days */
    select a.name,
           minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
    into #avg7Days
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    join #D as c on b.run_date = c.run_date
    where run_status in ('1', '2')
      and step_id = 0
    group by a.name

    /* output */
    select name = cast(a.name as varchar(35)),
           OneDayAvg = a.minutes,
           SevenDayAvg = b.minutes
    from #today as a
    join #avg7days as b on a.name = b.name
    order by a.name

    return
end

if @method = 'failures'
begin

    select @xdate = isnull(@xdate, getdate())

    select status = case run_status
                        when 0 then 'FAILED'
                        when 3 then 'CANCELED'
                        when 4 then 'EXECUTING'
                    end
          ,name = cast(a.name as varchar(35))
          ,step_name
          ,time = replace(convert(varchar, @xdate, 107), ' ', '')
                  + ' ' + right('0000' + cast(b.run_time / 100 as varchar), 4)
          ,b.message
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    where run_status in ('0', '3', '4')
      and run_date = convert(varchar, @xdate, 112)
    order by run_status, a.name
end
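Sample calls (the date is just an example):
------ SQL -------
EXEC UtilityJobsHistory                                   -- average durations for today
EXEC UtilityJobsHistory @method = 'failures'              -- today's failures/cancels
EXEC UtilityJobsHistory @method = 'failures', @xdate = '20071206'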
I've installed SP2 on three dev/test servers. I went to install it on production and the install failed on the Notification and Client updates.
The error message was -
MSP Error: 29549 Failed to install and configure assemblies C:\Program Files\Microsoft SQL Server\90\NotificationServices\9.0.242\Bin\microsoft.sqlserver.notificationservices.dll in the COM+ catalog. Error: -2146233087 Error message: Unknown error 0x80131501 Error description: The transaction has aborted.
I have a package that loops through a list of servers. I keep the list in a table, read it, then loop through it and dynamically set the ServerName in the Connection Manager so that I can connect to one server after another without having to set up separate Connection Managers for each. A Data Flow task queries each server for configuration information and writes that to a central database. Everything works well, unless a server is offline for some reason.
As long as it doesn't exceed the max number of errors, the package logs the error, skips over that server and continues along just fine. What I'd like to do is trap that error and manually write a row to the central database with the server name and an error message, so that at least all the servers show up in the report, even if they don't all have configuration data listed.
How do I handle this type of connection error? Everything I've seen on error handling either assumes it's a data error or that you want to log the error in some external log file. I want to execute a SQL script that writes the value of a variable (the server name) to a table.
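What I have in mind is an Execute SQL Task in the loop's OnError event handler (or on a failure precedence constraint) running something like the statement below, with the two parameters mapped to my User::ServerName variable and System::ErrorDescription (the table and column names are made up for the sketch):
------ SQL -------
-- record the unreachable server so it still shows up in the report
INSERT INTO dbo.ServerConfigStatus (ServerName, ErrorMessage, CollectedAt)
VALUES (?, ?, GETDATE())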
We are using HTTPS anonymous merge subscriptions....
Sometimes when trying to synchronise, we will get the following error messages returned to the subscriber....
The upload message to be sent to Publisher '**thewebserver**' is being generated.
The merge process is using Exchange ID '0F65CFCB-AF17-47DC-8D98-493A44C243E0' for this web synchronization session.
The Merge Agent could not connect to the URL 'https://**thewebserver**/client/replisapi.dll' during Web synchronization. Please verify that the URL, Internet login credentials and proxy server settings are correct and that the Web server is reachable.
The Merge Agent could not connect to the URL 'https://**thewebserver**/client/replisapi.dll' during Web synchronization. Please verify that the URL, Internet login credentials and proxy server settings are correct and that the Web server is reachable.
The Merge Agent received the following error status and message from the Internet Information Services (IIS) server during Web synchronization: [401: 'Unauthorized']. When troubleshooting, ensure that the Web synchronization settings for the subscription are correct, and increase the internet timeout setting at the Subscriber and the connection timeout at the Web server.
If I then go to a web brower, put in the HTTPS address, it brings up the logon dialog - I put in the admin username and password to confirm the connection and that's fine.
We try and synchronise again, and this time it works - it's as though I have 'woken' it up again and it's happy to play.
Is increasing the timeouts as suggested by the error message the way to go? If so, where does one set the 'internet timeout setting at the Subscriber', and the 'connection timeout at the Web server'?
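From what I can tell (please correct me if I'm wrong), the Subscriber side is the Merge Agent's -InternetTimeout parameter, in seconds, appended to the agent job step's command line, and the Web server side is the connection timeout in the IIS site's properties. For example:
replmerg.exe ... -InternetUrl "https://**thewebserver**/client/replisapi.dll" -InternetTimeout 600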
I'm debugging my first SSIS package and am getting inconsistent results. The package does not always complete successfully. When the package does fail, it fails at different tasks that acquire database connections. Any of the following error messages would show up: [Execute SQL Task] Error: Failed to acquire connection "FORGE.FMC". Connection may not be configured correctly or you may not have the right permissions on this connection.
[OLE DB Destination [6374]] Error: The AcquireConnection method call to the connection manager "FORGE.FMC" failed with error code 0xC0202009.
[Connection manager "FORGE.FMC"] Error: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Communication link failure". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Named Pipes Provider: No process is on the other end of the pipe. ".
I never experienced any connection errors when executing queries through Management Studio. It's only the SSIS packages that fail to connect every now and then. Any help is appreciated.
Actually this is in regard to an SCD Type 2 dimension. The scenario is that I am moving a fact table from an old source, and the fact rows carry a DimensionA description value, which I want to replace with the appropriate id from the dimension table. That dimension table is SCD Type 2, based on StartDate and EndDate, and the fact table doesn't contain a direct date value, only a TimeId. So to update the value in the fact table, I have to join the time dimension table and the other dimension table to replace the fact description with the proper id.
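In SQL terms, the update I'm trying to write looks roughly like this (all table and column names below are placeholders for my actual schema; it assumes the current dimension row has a null EndDate):
------ SQL -------
-- resolve the SCD Type 2 surrogate key via the fact's TimeId
UPDATE f
SET    f.DimensionA_Id = d.DimensionA_Id
FROM   FactTable f
JOIN   DimTime t
       ON t.TimeId = f.TimeId
JOIN   DimensionA d
       ON d.Description = f.DimensionA_Description
      AND t.CalendarDate >= d.StartDate
      AND (t.CalendarDate < d.EndDate OR d.EndDate IS NULL)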
I am doing a lookup that requires mapping 2 columns in the column mapping section. When I do this, I get the error "Row yielded no match during lookup" . The SQL that I captured in SQL profiler does find the record when I run it in Management Studio. I have already tried trimming everything to no avail.
Why is this happening?
I tried enabling memory restrictions, but then my package hangs and I get a SQLDUMPER_ERRORLOG.log file with the following logged:
I am getting occasional failures of a SQL Server 7.0 complete backup to disk on a production database. The errors seem to indicate that another process has the disk file open at the time of the backup. The errors contain the following texts : -
'Cannot open backup device' 'Operating System Error=32 Process cannot access file because it is being used by another process'.
The only other process that should access the disk file is an ARCserveIT scheduled job that copies the disk backup to tape, but this completes long before the backup runs.
We need to set up an email alert to activate when the ODBC connection fails to link the database to the application. Is it possible? We have SQL Mail working already. What shall we do to create such an alert? Thanks!
We have been seeing random inexplicable communication link failures when communicating with a Win2K SQL server for a while now. After a very detailed analysis of the various causes of the problem (network, name lookups, etc.), we've narrowed it down to possibly the ODBC driver. We are using TCP/IP.
I've stuck a packet sniffer on the connection between the SQL server and the client and in almost all cases, the connection suddenly terminates with the client sending a TCP reset to the server.
Looking at the packet traces further, it seems like in about 60% of the cases there is a period of activity on the TCP connection, then some inactivity during which there is a constant stream of TCP keepalives between the client and server, and then suddenly the client resets the TCP connection.
Now, we can usually correlate this TCP reset to some new activity initiated on the client application, so could this be related to connection pooling in ODBC? That's the only inference I can draw.
We are running Win2K SP3a on the server.
Any ideas on what else to look for or how to debug this further? I have 10GB of packet traces and can provide more details on the connection traces if necessary. The problem also is that we have ~100 clients constantly communicating with the SQL server and we will see anywhere from 10-20 random CLFs in a day.
I've searched the archives extensively and this does seem to be a problem for many people, but a few of them seem to have had genuine network problems and we've pretty much ruled that out since there are other simultaneous TCP connections between the client and the server and they seem to be okay.
I need to have a reliable alerting system for my merge replications I have setup on my MSSQL 2005 server.
The problem is that the built-in 'agent failure' alert only triggers when 'all databases' is selected for that specific alert. No failure alert is triggered when I select a single database and force a failure. The 'agent success' alert never gets triggered at all.
I need a reliable success and failure alert per database, because I need to perform specific actions per database.
Can someone help me out here?
I'm thinking of building my own alerting system if the help here is insufficient. In that case I need to know in what tables to look. Maybe someone can give me some pointers?
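If I do end up rolling my own, my plan is to read the merge agent tables in the distribution database, something like this (a sketch from memory; I believe runstatus 2 = succeed and 6 = fail, but please verify):
------ SQL -------
-- recent merge agent outcomes per subscriber database
SELECT a.name, a.publisher_db, a.subscriber_db,
       h.runstatus, h.start_time, h.comments
FROM distribution.dbo.MSmerge_history h
JOIN distribution.dbo.MSmerge_agents a ON h.agent_id = a.id
ORDER BY h.start_time DESC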
Hi All, I am placing a Matrix inside the table control for grouping requirements, but when we export the report to Excel, the contents inside the table cell are ignored. Is there any way to get the full report exported, as per the requirement? Please help me with this issue.
Sorry if this is a real simple question, but I just had a SQL 6.5 server dropped in my lap. I need to transfer all the data to a new box, which I did using the Tools menu (Database/Object Transfer) in Enterprise Manager. I checked the databases after the transfer and all the data seems to be there, including the logins. However, if I try to connect to the database, all logins fail.
Connection failed. SQL State '28000' SQL Server Error 4002 [Microsoft][ODBC SQL Server Driver][SQL Server] Login failed
Does anybody know how to fix this easily without resetting every single user id?
This is getting extremely annoying. I cannot uninstall another instance of SQL Server Express 2005.
I have had three different servers with separate installs of SQL Server Express 2005, each installed by a third-party product (Backup Exec, Dell IT Assistant, Microsoft System Center 2007). I uninstall the third-party product and it leaves SQL Server Express 2005 behind, for what idiotic reason I do not know. Then I try to remove SQL Server Express 2005 and, for the third straight time, it fails.
Here's the latest failure error:
TITLE: Microsoft SQL Server 2005 Setup
------------------------------
The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.
For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&EvtSrc=setup.rll&EvtID=1603&EvtType=sqlca%5csqlcax.cpp%40SetInstanceProperty%40SetInstanceProperty%40x534
------------------------------
BUTTONS: OK
------------------------------
Does anyone know how to actually remove SQL Server Express 2005? This is pathetic.
I am running WinXP Pro SP2 with all current updates and also VS2005 Team Ed for Developers. VS2005 is installed on the D drive, as are nearly all of my development tools. SQL Server 2000 SP4 is on the C drive, and I just installed SQL 2005 Express with Advanced Services to the D drive. I then attempted to install the Express toolkit BIDS to the D drive, only to learn it's hard-coded (really stupid to not check for existing VS 2005) to install on the C drive only. I've gotten past the devenv.exe issue.
The issue now is that when I open VS2005 with the normal shortcut or the Business Intelligence Development Studio shortcut, open any project that contains Crystal Reports reports, and attempt to open a report, I get package load failures for ReportDesignerPackage and Datawarehouse VSIntegration Layer Package. I also get this same error if I try to create a BIDS report project.
I thought maybe VS2005 has a search path variable in Tools/Options, or maybe a system environment variable that could be tweaked to tell VS2005 to also look in the IDE folder of the dummy VS install on the C drive. If there is, I have not discovered it yet.
My second thought was to copy the files in the IDE folder of the dummy VS install on the C drive to the IDE folder where my VS2005 is actually installed. I saw a post last night by someone who had done that with apparent success. That solution seems a little suspect, since the BIDS package files are registered at the C drive paths, so you certainly don't want to delete or move those files from where they were installed.
I'm nervous about side effects, during development and deployment, on my existing VS2005 projects that aren't even using BIDS.
So, now the question is: how does one resolve this conundrum?
We are seeing login failures for windows accounts. Below is the error message.
Description: In our environment most logins are Windows accounts. Initially we thought it was a UAC issue, so we tried launching SSMS using "Run as Administrator". However, we still see login failures.
Environment: Microsoft SQL Server 2014 - 12.0.2402.0 (X64) RTM Enterprise Edition (HyperVisor)
Error Message in Error Log :
2015-08-10 22:36:45.290 Logon Error: 18456, Severity: 14, State: 11.
2015-08-10 22:36:45.290 Logon Login failed for user 'domainloginname'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 10.xxx.xxx.xxx]
2015-08-10 22:41:23.470 Logon Error: 18456, Severity: 14, State: 11.
2015-08-10 22:41:23.470 Logon Login failed for user 'domainloginname'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 10.xxx.xxx.xxx]
Troubleshooting done:
- Recreated the Windows login in SQL Server. Doesn't work.
- Ran sp_validatelogins; it doesn't return any rows.
- I belong to the sysadmin role, and when I run the following, I get the error message below.
xp_logininfo 'domainloginname'
/*
Msg 15404, Level 16, State 19, Procedure xp_logininfo, Line 64
Could not obtain information about Windows NT group/user 'domainloginname', error code 0x5.
*/
We tried dropping this account and re-creating the Windows account with the same permissions, but the result is still the same: it throws the same login failure message!
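One more thing we plan to check, since state 11 reportedly points at server-level permission problems: whether CONNECT SQL is explicitly denied to the login or to one of its groups. A rough sketch:
------ SQL -------
-- list server-level CONNECT SQL grants/denies; a DENY row for the login
-- (or a group it belongs to) would explain the state 11 failures
SELECT pr.name, pr.type_desc, pe.state_desc, pe.permission_name
FROM sys.server_principals pr
JOIN sys.server_permissions pe
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = 'CONNECT SQL'
ORDER BY pe.state_desc, pr.name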