How To Handle Streams By Means Of A Script Task?
Jun 29, 2006
I'd need to create a file and then populate it. Any link/advice would be very appreciated.
Thanks in advance
Say I have 10 tables that need to be loaded from one database to another. There is minimal transformation from source to target and it's pretty much a column to column mapping.
Would you make 10 separate packages, each one loading a specific table, or 1 package that loads all 10 at the same time?
My inclination is to make 10 separate packages, because there will mostly be foreign key constraints on some of the tables. And if I created 10 packages, I would make an 11th package (I guess some people call this the master package) that sequences the run order for the 10 individual ones.
In this scenario, say I need to have a configuration file that sets the environment (dev vs prod). Would you make a configuration file for each of the individual packages (this would be repetitive, setting the same name and value 10 times), or is there a way to have a configuration for the master package where you would set the value just once?
I need to find a better way to handle DTS job failures. Currently, we have about 50 jobs which execute through DTS packages. Every time a source was not there or came in late, the DTS sent an email to my pager, which I carry every day. Some of those came in during the holidays, even though I know the source party won't generate the source files on those days.
I am trying to avoid getting beeped every time a job fails. Someone suggested that it is possible to add a kind of executable file within a Custom task and let it trigger the DTS packages: if, for example, it is a holiday, then don't run the package, so I won't get the page.
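One way to implement that guard, as a sketch: keep a holiday table and make the first step of the job bail out on holidays, so the package, and therefore the failure page, never fires. The table dbo.SourceHolidays and its varchar(10) HolidayDate column in yyyy-mm-dd form are hypothetical.
-- Runs as the first step of the job; if today is a listed holiday,
-- end the batch before the step that launches the DTS package.
IF EXISTS (SELECT 1
           FROM dbo.SourceHolidays
           WHERE HolidayDate = CONVERT(varchar(10), GETDATE(), 120))  -- 'yyyy-mm-dd'
BEGIN
    PRINT 'Holiday: skipping package execution.'
    RETURN  -- exits the batch; the package is never launched
END
-- otherwise fall through to the step that runs the package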
Any hints/suggestions would be greatly appreciated.
J827
Hi All,
I need to send out an email when an error occurs in the package. Is it good practice to put the Send Mail task in the event handler? MaximumErrorCount is set to 1, but for some reason I sometimes see more than one email sent out. Please advise. Thanks
I am trying to use System.IO.Compression to compress/decompress a buffer. I have the data in byte format in SQL Server and I need to create a CLR user-defined function. I have everything running, except that for some of the files the compressed size is larger than the actual file size. I tried both the Deflate and GZip compression types. Has anyone faced a similar situation? Thanks in advance for your help.
Using BULK Insert with a format file I am receiving the following message:
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'STREAMS' reported an error. The provider did not give any information about the error.
The statement has been terminated.
I am running SQL Server 7.0 w/ SP1 applied. The same data file and format files work fine if I use bcp.
The data file contains fixed length records.
Any ideas what the problem is?
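For reference, a minimal BULK INSERT of this shape (table and paths hypothetical); with fixed-length records, one thing worth double-checking is that the record length and row terminator declared in the format file match the data file exactly:
BULK INSERT dbo.TargetTable
FROM 'C:\data\fixed_records.dat'
WITH (FORMATFILE = 'C:\data\fixed_records.fmt')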
The following is a list of questions to which I have not been able to obtain concrete answers. I am probably missing something:
1) ReadWriteVariables -- can the updated value for a ReadWriteVariable be accessed within the same data flow? It appears not, as I think PostExecute() fires at the completion of the data flow, not at the end of the Script Component. Secondarily, the Script Component is a non-blocking transformation, so the component does not "see" the end of the pipeline prior to sending data downstream.
2) Record Count -- because of #1 above, how could you calculate a record count for a data stream? It does not appear that one can calculate the number of records for a data stream within a data flow and then access the count from within the same data flow.
3) FinishOutputs() -- Is the concept of FinishOutputs() applicable to Script Component Destinations? Asked another way, is FinishOutputs() executed at the end of the data stream regardless of whether there are "real" outputs for the component? I can create a "Dummy" output to create FinishOutputs() but is this ok?
4) Script Component -- it appears that the Script Component Source, Transformation or Destination are really defined based on the columns defined in "Inputs and Outputs". Can you convert a Source script component to a transformation script component by simply adding an Output?
Sorry for these basic questions but I am not getting it completely. As you can tell...
More of a general "Streams" question than broker specific, but this is all being done in the context of Broker passing the messages around. The use of Streams & Encoding seems to be my problem and I'm not as familiar with Streams as I am with other areas of the Framework... Any advice would be appreciated.
At this point, I've created my own objects/stored procedures based loosely on the ServiceBrokerInterface class provided in the SQL Server samples. Some of this was done for simplification and as a learning exercise, but also to ensure that all of the SQL operations are being done via Stored Procedures and not inline SQL. This was done to adhere to our existing security policy used on this project.
In this "interface" I've built, I have a [BrokerMessage.cs] class which is meant to have a few additional pieces of functionality beyond what the MS provided version had supplied.
1st... A constructor for accepting either String or XmlDocument as the "content"
2nd... Methods to return either a XmlDocument or a simple String.
Since all of the Broker functionality is defined as using VARBINARY(MAX) in my stored procedures, I don't believe I have any problems at that level. It's simply a binary blob to Broker.
In my constructor for accepting String or XmlDocuments, I attempted to use the following...
public BrokerMessage(string type, XmlDocument contents)
{
m_type = type;
m_contents = new MemoryStream(EncodingProvider.GetBytes(contents.InnerXml));
}
My understanding was that MemoryStream is derived from Stream so I can implicitly cast it. The "EncodingProvider" is a static member set as follows:
public static Encoding EncodingProvider = Encoding.Unicode;
This way I ensure that internal & external code can all be set to use the same encoding and easily changed if necessary. I was hoping to avoid using Unicode since the rest of the application does not require it, but from my understanding all Xml documents in SQL Server are Unicode based, so this should be a better encoding choice for any processing that may potentially occur within SQL Server itself.
In my methods to return the various forms of the "Stream", I have the following code. The ToBytes() method is what is used to pass into the stored procedure parameter that is defined as VarBinary and expects a byte array. One area of concern is that the Read method accepts an int for the length, but the actual Length property is a long. I'm sure there's a better way to handle this and I would welcome any advice there.
/// <summary>
/// Used to convert from a Stream back to a simple Byte array.
/// </summary>
/// <returns></returns>
public virtual byte[] ToBytes()
{
// Stream.Read may return fewer bytes than requested, so rewind first
// and loop until the whole stream has been copied out.
this.Contents.Position = 0;
byte[] results = new byte[this.Contents.Length];
int offset = 0;
while (offset < results.Length)
offset += this.Contents.Read(results, offset, results.Length - offset);
return results;
}
/// <summary>
/// Used to convert from a Stream back to a simple String.
/// </summary>
/// <returns></returns>
public override string ToString()
{
byte[] buffer = this.ToBytes();
String results = EncodingProvider.GetString(buffer);
return results;
}
/// <summary>
/// Used to convert from a Stream back to a simple XmlDocument.
/// </summary>
/// <returns></returns>
public virtual XmlDocument ToXmlDocument()
{
XmlDocument results = new XmlDocument();
// InnerText would store the markup as escaped text; LoadXml parses it as XML.
results.LoadXml(this.ToString());
return results;
}
Hi
I'm using Service Broker and keep getting errors in the log even though everything is working as expected.
SQL Server 2005
Two databases
Two end points - 1 in each database
Two stored procedures:
SP1 is activated when a message enters the sending queue. It inserts a new row in a table.
SP2 is activated when a response is sent from the receiving queue. It cleans up the sending queue.
I have a table with an update trigger
In that trigger, if the updated row meets a certain condition, a dialog is created and a message is sent to the sending queue.
I know that SP1 and SP2 are behaving properly because I get the expected result.
SP1 is inserting the expected data in the table.
SP2 is cleaning up the sending queue.
In the SQL Server log, however, I'm getting errors on both of the stored procs.
error #1
The activated proc <SP 1 Name> running on queue Applications.dbo.ffreceiverQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
error #2
The activated proc <SP 2 Name> running on queue ADAPT_APP.dbo.ffsenderQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
I would appreciate anybody's help as to why I'm getting this. Have I set up the stored procs incorrectly?
I can provide the code of the stored procs if that helps.
thanks.
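For what it's worth, a frequent cause of "The conversation handle is missing" is an activated proc whose RECEIVE loop comes around once more after the queue is drained and then runs SEND or END CONVERSATION with a NULL handle. A sketch of a guard, using the queue name from the post but otherwise hypothetical:
DECLARE @handle uniqueidentifier,
        @msg_type sysname,
        @body varbinary(max)
WHILE (1 = 1)
BEGIN
    SET @handle = NULL   -- so a timed-out wait is detectable below
    WAITFOR (
        RECEIVE TOP (1)
            @handle   = conversation_handle,
            @msg_type = message_type_name,
            @body     = message_body
        FROM dbo.ffreceiverQueue
    ), TIMEOUT 1000;
    -- Nothing arrived before the timeout: exit instead of falling through
    -- to statements that require a conversation handle.
    IF @handle IS NULL
        BREAK;
    IF @msg_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
        END CONVERSATION @handle;
    -- ... normal message processing goes here ...
END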
We have implemented our Service Broker architecture using conversation handle reuse per MS/Remus's recommendations. We have all of a sudden started receiving the "conversation handle not found" errors in the SQL log every hour or so (which makes perfect sense considering the dialog timer is set for 1 hour). My question is: is this expected behavior when you have employed conversation recycling? Should you expect to see these messages pop up every hour, while the logic in the queuing proc retries after deleting from your conversation handle table, so the message is enqueued as expected?
Second question: I think I know why we were not receiving these errors before and wanted to confirm this theory as well. In the queuing proc I was not initializing the variable @Counter to 0, so when it came down to the retry logic it could not add 1 to NULL and never entered that part of the code. I am guessing that with that setup it would actually output the error to the application calling the queuing proc and NOT into the SQL error logs; is this a correct assumption?
I have attached an example of one of the queuing procs below:
Code Block
DECLARE @conversationHandle UNIQUEIDENTIFIER,
        @err int,
        @counter int,
        @DialogTimeOut int,
        @Message nvarchar(max),
        @SendType int,
        @ConversationID uniqueidentifier

SELECT @Counter = 0 -- THIS PART VERY IMPORTANT LOL :)

SELECT @DialogTimeOut = Value
FROM dbo.tConfiguration WITH (NOLOCK)
WHERE keyvalue = 'ConversationEndpoints' AND subvalue = 'DeleteAfterSec'

WHILE (1=1)
BEGIN
    -- Lookup the current SPID's handle
    SELECT @conversationHandle = [handle]
    FROM tConversationSPID WITH (NOLOCK)
    WHERE spid = @@SPID AND messagetype = 'TestQueueMsg';

    IF @conversationHandle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @conversationHandle
            FROM SERVICE [InitiatorQueue_SER]
            TO SERVICE 'ReceiveTestQueue_SER'
            ON CONTRACT [TestQueueMsg_CON]
            WITH ENCRYPTION = OFF;

        BEGIN CONVERSATION TIMER ( @conversationHandle )
            TIMEOUT = @DialogTimeOut

        -- insert the conversation in the association table
        INSERT INTO tConversationSPID ([spid], MessageType, [handle])
        VALUES (@@SPID, 'TestQueueMsg', @conversationHandle);

        SEND ON CONVERSATION @conversationHandle
            MESSAGE TYPE [TestQueueMsg] (@Message)
    END
    ELSE IF @conversationHandle IS NOT NULL
    BEGIN
        SEND ON CONVERSATION @conversationHandle
            MESSAGE TYPE [TestQueueMsg] (@Message)
    END

    SELECT @err = @@ERROR;

    -- if succeeded, exit the loop now
    IF (@err = 0)
        BREAK;

    SELECT @counter = @counter + 1;
    IF @counter > 10
    BEGIN
        -- Refer to http://msdn2.microsoft.com/en-us/library/ms164086.aspx for severity levels
        EXEC spLogMessageQueue 20002, 8, 'Failed to SEND on a conversation for more than 10 times. Error %i.'
        BREAK;
    END

    -- We tried on the said conversation, but failed:
    -- remove the record from the association table, then
    -- let the loop try again
    DELETE FROM tConversationSPID
    WHERE [spid] = @@SPID;

    SELECT @conversationHandle = NULL;
END;
OK. I give up and need help. Hopefully it's something minor ...
I have a dataflow which returns email addresses to a recordset.
I pass this recordset into a ForEachLoop, configuring the enumerator as a Foreach ADO Enumerator. I also map the email address to a variable with index 0.
I then have an Execute SQL task which receives this email address as a varchar variable (parameter 0), which I use in my SQL command to limit the rows returned. I have commented out the WHERE clause and returned all rows regardless of email address to try to troubleshoot this problem. In either event, I then use a result set to store the query result, of type Object and result name 0.
I then pass this result set into a script variable to start parsing the SQL rows returned as type Object (I assume this is the correct way to do this, from other prior posts...).
The script appears to throw an exception at the following line. I assume it's because I'm either not passing in the values properly or the query doesn't return anything. However, I am certain the query works as it executes just fine at the command prompt.
Try
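' If the Execute SQL task ran with an OLE DB connection, the Object variable
' holds a COM recordset rather than a DataSet, and this CType throws; loading
' it into a DataTable via OleDbDataAdapter.Fill is one common workaround.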
ds = CType(Dts.Variables("VP_EMAIL_RESULTS_RS").Value, DataSet)
My intent is to email the query results to each email address with the following type of data, by passing the parsed data from the script to a Send Mail task. Email works fine and sends out messages, but the content is empty. I pass the parsed data as string values to the MessageSource and define the MessageSourceType as a variable in the mail task.
part number leadtime
x 5
y 9
....
Does anyone have any idea what I might be doing wrong?
thanks
John
Do I have another option besides using IF...ELSE IF? TIA
-- GET INFORMATION OF THE JOB
DECLARE @JOBID AS Char(10)
DECLARE @VRUSERVICEHRS AS Decimal(18,2)
DECLARE @VRUSERVICEMIN AS Decimal(18,2)
DECLARE @BILLEDFLAT AS Decimal(18,2)
DECLARE @BILLREGRATE AS Decimal(18,2)
DECLARE @MIN_HRS AS Decimal(18,2)
DECLARE @COUNT_GREATER_MIN TinyInt
DECLARE @COUNT_LESS_MIN TinyInt
SET @VRUSERVICEMIN = 46
SET @BILLEDFLAT = 0
-- PROCESS ONLY RECORDS WHERE THERE IS NO FLAT FEE
IF @BILLEDFLAT = 0
BEGIN
IF @VRUSERVICEMIN BETWEEN 0 AND 15
BEGIN
SET @VRUSERVICEMIN = .25
END
ELSE IF @VRUSERVICEMIN = 15
BEGIN
SET @VRUSERVICEMIN = .25
END
ELSE IF @VRUSERVICEMIN BETWEEN 15 AND 30
BEGIN
SET @VRUSERVICEMIN = .5
END
ELSE IF @VRUSERVICEMIN = 30
BEGIN
SET @VRUSERVICEMIN = .5
END
ELSE IF @VRUSERVICEMIN BETWEEN 30 AND 45
BEGIN
SET @VRUSERVICEMIN = .75
END
ELSE IF @VRUSERVICEMIN = 45
BEGIN
SET @VRUSERVICEMIN = .75
END
ELSE IF @VRUSERVICEMIN > 45
BEGIN
SET @VRUSERVICEMIN = 1
END
END
PRINT @VRUSERVICEMIN
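One alternative to the IF...ELSE IF chain is a single CASE expression; since BETWEEN is inclusive, the separate equality branches above are already covered by the ranges. A sketch of the same logic (assuming @VRUSERVICEMIN is non-negative):
IF @BILLEDFLAT = 0
    SET @VRUSERVICEMIN =
        CASE WHEN @VRUSERVICEMIN <= 15 THEN 0.25
             WHEN @VRUSERVICEMIN <= 30 THEN 0.50
             WHEN @VRUSERVICEMIN <= 45 THEN 0.75
             ELSE 1.00
        END
PRINT @VRUSERVICEMIN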
Is it possible to catch an error and then keep the process going in a stored procedure?
So if an update encounters a primary key violation on a row, is it possible to skip that row and keep the process going?
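On SQL 2005 or later, TRY...CATCH inside a row-by-row loop lets you skip the offending row and continue. A minimal sketch, with hypothetical tables dbo.Staging(id) and dbo.Target(id PRIMARY KEY):
DECLARE @id int
DECLARE rows_cur CURSOR LOCAL FAST_FORWARD FOR SELECT id FROM dbo.Staging
OPEN rows_cur
FETCH NEXT FROM rows_cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Target (id) VALUES (@id)  -- one row at a time
    END TRY
    BEGIN CATCH
        -- e.g. a PK violation on this row: log it and move on
        PRINT 'Skipped row ' + CAST(@id AS varchar(12)) + ': ' + ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM rows_cur INTO @id
END
CLOSE rows_cur
DEALLOCATE rows_cur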
Hi! I have a try...catch block trying to insert some data into the database. During this, a duplicate key row insert error could be raised, for example. The question is: how can I distinguish it from other SQL errors? The object ex (Catch ex As Exception) has only the message property '{"Cannot insert duplicate key row in object 'dbo.Group_Courses' with unique index 'IX_Group_Courses'. The statement has been terminated."}' and type System.Data.SqlClient.SqlException. Knowing the type of error is not enough, because there are different SqlExceptions. Even the message is not unique for this error, because now I deal with 'dbo.Group_Courses' and later it could be another table. Is there something that uniquely identifies each error, for example an error code? If it exists, where could I get it?
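Every SQL Server error does carry a stable number; the message quoted above appears to be error 2601 (duplicate key on a unique index), and client-side the same value is exposed as SqlException.Number. On SQL 2005 or later you can also see it server-side; a sketch, with hypothetical column names:
BEGIN TRY
    -- Forces the duplicate for demonstration purposes.
    INSERT INTO dbo.Group_Courses (GroupId, CourseId) VALUES (1, 1)
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrNo, ERROR_MESSAGE() AS ErrMsg  -- ErrNo = 2601
END CATCH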
Thanks in advance!
I test the Now function and it is getting the right date. When I try to send that to the SQL database, it turns into 1/1/1900. Does anyone know why this is happening? I have tried everything. Here is my code:
sql = "Insert into tblguestbook(date, name, city, state, email, Url, Comments)Values ('"
sql = sql & Request.Form(Now) & "','"
sql = sql & Request.Form("nametxt") & "','"
sql = sql & Request.Form("citytxt") & "','"
sql = sql & Request.Form("statetxt") & "','"
sql = sql & Request.Form("emailtxt") & "','"
sql = sql & Request.Form("urltxt") & "','"
sql = sql & Request.Form("commentstxt") & "');"
Hi,
I have a table with one field set as nvarchar(4000).
This is sometimes not big enough, and I get an error that the max row size has been reached.
How do people handle the error when the max row size is reached and gracefully inform the user?
Also, should I really be using ntext instead? Would this be better, or is there a performance penalty?
Much obliged.
RG
Hi,
I would like to handle a SQL error in T-SQL and return a certain value in case an error occurs. For example, when adding a record I want to return a certain identity value, or maybe a status for the transaction (0 for incomplete, 1 for successful).
If an error occurs in SQL, I cannot return any values back to ASP.NET. What I am doing at the moment is catching the error in ASP.NET and then displaying an error message. Is there a way to return only a return value to ASP.NET and somehow handle the error in T-SQL?
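One common pattern for statement-level errors such as constraint violations (a sketch, with a hypothetical table dbo.Customers(CustomerId int IDENTITY, Name varchar(50))): trap the error with @@ERROR and hand both the identity and a status flag back through OUTPUT parameters, so the page never sees the raw exception.
CREATE PROCEDURE dbo.AddCustomer
    @Name   varchar(50),
    @NewId  int OUTPUT,
    @Status int OUTPUT   -- 0 = incomplete, 1 = successful
AS
BEGIN
    INSERT INTO dbo.Customers (Name) VALUES (@Name)
    IF @@ERROR <> 0
    BEGIN
        SELECT @NewId = 0, @Status = 0
        RETURN
    END
    SELECT @NewId = SCOPE_IDENTITY(), @Status = 1
END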
Thanks
I don't know what's wrong with my SQL 2000 setup; the problem is:
whenever I execute a query with a date/time such as 02/02/200,
SQL gives me an error message saying that the date is "Out of Range".
Any ideas, thanks in advance.
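For reference, assuming the value is a datetime: datetime only covers January 1, 1753 through December 31, 9999, so a literal whose year parses as 200 is rejected. A quick check:
SELECT CAST('20000202' AS datetime)      -- yyyymmdd, year 2000: works
-- SELECT CAST('02/02/200' AS datetime)  -- year 200 is before 1753: "out of range"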
I'm having trouble with something and I was hoping that you could point me in the right direction.
Here's the scenario:
I have a VB application that clients use to add records to a SQL 2K DB. The info they have added that day is shown in a grid. They have the ability to edit items in the grid, and then update those changes to the database. The problem is that sometimes they change the values to something they shouldn't. To combat this I've started experimenting with check constraints. In Query Analyzer I test the constraint by trying to update an entry to an 'illegal' value. When I do this, I get an error saying: "Server: Msg 547, Level 16, State 1, Line 1", and the change is not made. What I'd like to do is give the user a dialog box notifying him of the error. Is there a way to have a subroutine or stored procedure be triggered when a message of this type is generated?
My specs are: W2K pro clients, SQL2K on an NT4 sp6a server. Application is written in VB6.
Any help is greatly appreciated.
thanks,
-scott
I have a select statement that works, but I know there has to be a better way (I apologize for being TSQL brain-dead today). Here is the statement:
SELECT PATIENT_ACPT_STATUS, DISP_NOTES, RECORD_ID, modified_by, last_name, ddate
FROM PATIENT_MEDICATION_DISPERSAL_
Where (ddate = convert(char(10),getdate(),101) and
(MODIFIED_BY = CURRENT_USER) and
rec_status = 1) or
(ddate = convert(char(10),getdate(),101) and
(PATIENT_ACPT_STATUS = 1)) OR
(ddate = convert(char(10),getdate(),101) and
(MODIFIED_BY = 'open'))
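Since the ddate test is common to all three branches, it can be factored out; a sketch of an equivalent statement:
SELECT PATIENT_ACPT_STATUS, DISP_NOTES, RECORD_ID, modified_by, last_name, ddate
FROM PATIENT_MEDICATION_DISPERSAL_
WHERE ddate = CONVERT(char(10), GETDATE(), 101)
  AND (   (MODIFIED_BY = CURRENT_USER AND rec_status = 1)
       OR PATIENT_ACPT_STATUS = 1
       OR MODIFIED_BY = 'open')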
Hi All,
We have got a new HP server with SQL preinstalled, but when we try to start SQL, it gives the error "The handle is invalid".
We tried to reinstall SQL, but, it being a preinstalled server, it doesn't allow us to do so.
Please assist.
regards,
Jatin
I have a need: when an Update, Insert or Delete is done to a record in DB "A", it should send the appropriate UID to a different table in a different DB "B".
My first thought was to have a trigger on the table in DB "A" simply call a stored procedure in DB "B" and do the UID.
However, my question is: what is the best approach, and what's the best way to establish the connection to DB "B" for the UID from within DB "A"? We can't use linked servers; a DSN-less string would be the preferred way to connect. I am not sure how to execute that within a trigger.
Is even using a trigger to call a stored proc the best way?
What about transactional replication, which I've never attempted? Is that a better way?
I am just looking for some guidance here so I don't needlessly burn time down a path that isn't recommended or simply won't work.
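If both databases happen to live on the same instance, a plain three-part name inside the trigger avoids linked servers and connection strings entirely. A sketch with hypothetical names (single-row for clarity; a real trigger must handle multi-row inserted/deleted tables, and note the cross-database call runs inside the trigger's transaction):
CREATE TRIGGER dbo.trg_SourceTable_Sync ON dbo.SourceTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @id int
    SELECT TOP 1 @id = id FROM inserted  -- simplification: one row only
    IF @id IS NOT NULL
        EXEC DatabaseB.dbo.usp_ApplyChange @id
END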
Thanks as always for your input,
Peter
Hi,
I build a local cube from a relational database. In the database there are 1:n relations.
Is there a way to handle 1:n relations?
For example:
I have a table LOGGEDFLAW and a table LOGGEDREASON with a 1:n relation between them. We create a select statement on these tables, and as a result we get duplicate records of LOGGEDFLAW each time more than one record of LOGGEDREASON is associated with one record of LOGGEDFLAW; this is the standard result of a relational JOIN operation. Now I want to count the LOGGEDFLAWs without the duplicates generated by the 1:n relationship.
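On the relational side, COUNT(DISTINCT ...) collapses the duplicates the join introduces; a sketch, assuming a hypothetical key column FLAWID:
SELECT COUNT(DISTINCT f.FLAWID) AS FlawCount
FROM LOGGEDFLAW f
INNER JOIN LOGGEDREASON r ON r.FLAWID = f.FLAWID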
Best regards,
Thorsten
Hi All,
Please help me out with how to implement locking in the scenario below.
Req -
There are two tables Table1 & Table2
If I insert into table1, then the related data fields will be auto-updated in table2; similarly, based on the data in table2, table1 data needs to be updated.
Now the sync of table1 & table2 is working fine.
My problem is that we are handling the updates/inserts from the UI screens, with two separate screens, one for each table. When we have multiple users accessing the screens, say User1 updates table1 and User2 updates table2, we need to implement locking so that at any one time only one screen is allowed to update table1 and hence table2.
The other screen shouldn't be allowed to update table2 and hence table1.
This is very common locking functionality, but I am not finding any way to implement it. Please advise.
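One way to get exactly this behavior, as a sketch with hypothetical names: have both screens take the same exclusive application lock before touching either table, so whichever session gets it first blocks the other until it commits.
DECLARE @rc int
BEGIN TRAN
-- Both screens request the same named lock; the second caller waits up to 5s.
EXEC @rc = sp_getapplock @Resource = 'Table1Table2Sync',
                         @LockMode = 'Exclusive',
                         @LockTimeout = 5000
IF @rc >= 0
BEGIN
    UPDATE dbo.Table1 SET SomeCol = 'x' WHERE Id = 1   -- paired updates go here
    UPDATE dbo.Table2 SET SomeCol = 'x' WHERE Id = 1
    COMMIT TRAN   -- releases the application lock
END
ELSE
BEGIN
    ROLLBACK TRAN   -- lock not granted: the other screen is mid-update
END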
Srain.
I have a friend who is doing a voting application for one of his customers, and they are concerned about the volume that SQL Server can handle. He's looking at a single SQL Server 2005 box with plenty of HD space and 4 GB of memory. The app will check whether you have voted and then insert a record accordingly.
Are there any papers out there, or apps, that can show how much load a server can handle?
thanks.
I already asked this question; however, I am giving all the details now: we get large files (millions of records) and we need to load them into our tables using the import/export wizard. Some of the fields in the file can be Null, so we are forced to create the table with fields that allow Nulls with default ''. However, when we insert data into these tables it puts Null in those fields even though we have a default '' (I do not think we have any workaround for that; do we?). Finally we need to go through each field and update it to '' if it is Null, and that takes A LOT OF TIME:
If (select count(*) from <tablename> where <columnname> is Null) > 0
Begin
Update <tablename>
set <columnName> = ''
where <columnName> is Null
end
Please let me know if there are any workarounds for this crisis. Thank you very much in advance!
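For what it's worth, the COUNT(*) pre-check scans the table a second time for nothing: the UPDATE by itself is a no-op when no rows qualify. A sketch with hypothetical names:
UPDATE dbo.ImportedData
SET SomeColumn = ''
WHERE SomeColumn IS NULL
PRINT CAST(@@ROWCOUNT AS varchar(12)) + ' row(s) fixed'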
I need to know if there is a better way to construct this SQL statement. (Error handling is omitted.) MS SQL Server 2000:
Insert into FSSUTmp
Select a.acct_no, a.ac_nm, a.ac_type, 10, -1,
-1 * sum(CHARINDEX(convert(char(4), b.post_yr), @post_yr) * CHARINDEX('-', CONVERT(char(2), b.post_prd - @post_prd - 1)) * b.prd_trn_amt),
-1 * sum(CHARINDEX(convert(char(4), b.post_yr), @post_yr) * CHARINDEX('0', CONVERT(char(1), b.post_prd)) * b.prd_trn_amt)
FROM GLAccounts a, GLBalances b
WHERE b.cmpny_cd = a.cmpny_cd
AND b.acct_no = a.acct_no
AND a.cmpny_cd = @cmpny_cd
AND ac_ctrl_type between '200' and '219'
Group by a.acct_no, a.ac_nm, a.ac_type
The part I'm wondering about is the two sum sections. The GLBalances table has the following important fields:
Post_yr -- the posting year
Post_prd -- the posting period
Prd_trn_amt -- the beginning balance if the period is 0, or the net transactions for periods 1 through 12
The first sum gives the current balance as of the period @post_prd by adding all of the periods from 0 to @post_prd. The second sum is just the beginning balance. It is doing a conditional sum by using CHARINDEX to be 0 if the record should not be added and 1 if it should. There is a problem as it stands when you are looking for the balances when @post_prd is 9 or greater, because b.post_prd - @post_prd - 1 will be -10 or smaller. Then the CONVERT(char(2), ...) is an error, so CHARINDEX is 0 when it needs to be 1. I can fix that by using SIGN and it will work fine. What I want to know is: is there a better way to populate the table, where one of the values is a conditional sum? This is a stored procedure from a commercial product, so I can't change anything else other than the stored procedure.
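A CASE expression states the conditional sums directly and has no range problem when @post_prd is 9 or greater. A sketch (the char/int comparisons are assumed to convert implicitly as in the original):
SELECT a.acct_no, a.ac_nm, a.ac_type, 10, -1,
       -1 * SUM(CASE WHEN b.post_yr = @post_yr
                      AND b.post_prd <= @post_prd THEN b.prd_trn_amt ELSE 0 END),
       -1 * SUM(CASE WHEN b.post_yr = @post_yr
                      AND b.post_prd = 0 THEN b.prd_trn_amt ELSE 0 END)
FROM GLAccounts a
JOIN GLBalances b ON b.cmpny_cd = a.cmpny_cd AND b.acct_no = a.acct_no
WHERE a.cmpny_cd = @cmpny_cd
  AND ac_ctrl_type BETWEEN '200' AND '219'
GROUP BY a.acct_no, a.ac_nm, a.ac_type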
View 1 Replies View RelatedHi,What exactly does the above (subject) error mean? I'm getting it from anadp file when used by a few people at the same time (each user has thefile in their own filespace though). Access is through windowsauthentication and it only seemed to occur during an update of aspecific table.. The problem is it didn't happen to everyone and Ican't recreate it at all on my own, so am wondering if it was somethingto do with the level of traffic to/from the server at that specific time.Any clues?Cheers,Chris
View 4 Replies View RelatedDoes anyone knows of a tool that will help me manage my queries? I have 100's of them and all scattered. On my PC at home, some in my laptop, some of them at work, some in memory sticks. Then, when I need them I can't find the one I'm looking for so I end up writing the query again.
Any ideas?
Hello
I am developing a smart device application using Visual Studio 2005 and SQL Server Compact Edition.
Since we cannot write stored procedures, functions, or views, what is the best way to write/read to/from the database tables other than writing inline SQL?
Any ideas and suggestions regarding this would be helpful.
Thanks
Here is the basic situation.
(1) I have 2 packages that run independently of each other.
(2) Both are scheduled in SQLServer Agent Jobs as separate jobs.
(3) Job A is scheduled to run every 12 hours and Job B is scheduled to run every 10 minutes.
(4) However, I want to prevent Job B from running if Job A happens to be running.
(5) It is unknown exactly how long Job A will take to finish so I can't schedule Job B around it.
The way I wanted to approach this situation is as follows.
Within Job A's package, create a "marker" file when the package starts and delete it when the package finishes. So the existence of this marker file will tell Job B's package if it should run or not.
The concept is simple, but I'm not sure how to implement this.
For example, to create the marker file, I would use a File System Task, but I don't see an option in there to "create file" (however, I do see an option for "delete file"). Also, what task would I use in Job B's package to check whether the marker file exists?
Lastly, If you have better approach, I would like to hear about it.
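One alternative that avoids the marker file entirely: make Job B's first step query msdb for Job A's live status and fail when it is running; with the step's failure action left at "Quit the job reporting failure", Job B simply skips that cycle. A sketch for SQL 2005 (strictly, you would also filter sysjobactivity to the most recent agent session in msdb.dbo.syssessions; the job name is hypothetical):
IF EXISTS (SELECT 1
           FROM msdb.dbo.sysjobactivity a
           JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id
           WHERE j.name = N'Job A'
             AND a.start_execution_date IS NOT NULL
             AND a.stop_execution_date IS NULL)
    RAISERROR('Job A is running; skipping this run of Job B.', 16, 1)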
Thank you.
My way: add a boolean column in the Derived Column component to specify whether the value is null; I feel that's a little difficult.
Any better ideas? Thanks
Using ASP.NET 2.0 and Visual Studio 2005. The question is regarding the following ER diagram: I've made FirstName, LastName, BuildingID and RoomNum all required fields. I've got a modified GridView that displays all of the Faculty table's columns. It's been modified so the BuildingID and DeptID are resolved to their actual field names and displayed in a DropDownList. In the dropdown list I used for inserting (a separate DetailsView control), I manually inserted an item into the Department dropdown list which had the text "-- Select a Department --" with a value of -1. MS SQL didn't like that -1 value, so I wrote the following code to fix it:
protected void dsFaculty_Inserting(object sender, SqlDataSourceCommandEventArgs e)
{
if (e.Command.Parameters["@DeptID"].Value.ToString() == "-1")
{
e.Command.Parameters["@DeptID"].Value = null;
}
}
That means, of course, DeptID is null, which is OK. The problem arises when I try to edit that row in the GridView. I get the error: "'ddlDepartment' has a SelectedValue which is invalid because it does not exist in the list of items. Parameter name: value". Ideally, I'd like to make the dropdown list in the GridView show "-- None --" for the DeptID if it comes across a null value. I already tried playing around with the Command.Parameters in the dsFaculty_Selected function, but it didn't work. Ideas?