I was able to successfully create a database maintenance plan for SQL Server 2000 transaction log shipping for a few databases a few weeks ago. Yesterday I created a few more, but to my surprise, I can no longer do it. I can create a maintenance plan, but the job it creates does not start, even if I force it to start. I did exactly the same thing as before (I document everything I do), but no luck.
I'm seeing some strange behavior from a stored procedure of mine. It essentially grabs a bunch of rows using a fairly simple JOIN. Here's the FROM clause onward:
FROM Payment PY (NOLOCK)
JOIN (SELECT DISTINCT PY.AccountPaymentId,
             ROW_NUMBER() OVER (ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)) AS SQ
  ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = cm.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
                               AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1)
                 AND ((@Page * @PageSize) - @PageSize) + @PageSize
  AND ((@PayeeName IS NULL) OR (PT.[Name] LIKE '%' + @PayeeName + '%'))
  AND ((@AccountId IS NULL) OR (AC.AccountId = @AccountId))
  AND ((@DistributionId IS NULL) OR (DS.DistributionId = @DistributionId))
  AND ((@PaymentDate IS NULL) OR (DATEADD(day, DATEDIFF(day, 0, PY.PaymentDate), 0)
                                = DATEADD(day, DATEDIFF(day, 0, @PaymentDate), 0))) -- ignores the time
  AND ((@PaymentNumber IS NULL) OR (PY.AccountPaymentId = @PaymentNumber))
  AND ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
  AND ((@AmountIssued IS NULL) OR (PY.PaymentAmount = @AmountIssued))
  AND ((@AmountPaid IS NULL) OR (PY.AccountPaidAmount = @AmountPaid))
  AND ((@IssueStatus IS NULL) OR (PY.PaymentStatusEnumItemId = @IssueStatus))
  AND ((@AccountStatus IS NULL) OR (PY.AccountStatusEnumItemId = @AccountStatus))
  AND ((@IsReissued IS NULL) OR (PY.ReissuedInd = @IsReissued))
ORDER BY AccountPaymentID ASC
When I pass a 1 for the @IsReconciled parameter, I get the right number of rows back: 9779. But when I pass a 0 (zero), I get no rows back, although there are 222 rows that satisfy the condition.
Is there something I'm overlooking (I don't think I am)? I don't know why 1 works and 0 doesn't.
FYI, the @IsReconciled parameter is set to NULL at the outset of the procedure.
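To rule out the filter pattern itself, I ran this minimal check (table and column are made up for the test). Note that RowNum is computed over all payments before the reconciled filter is applied, so the paging range and the filter do interact:

CREATE TABLE #t (ReconciledInd bit)
INSERT INTO #t VALUES (1)
INSERT INTO #t VALUES (0)

DECLARE @IsReconciled bit
SET @IsReconciled = 0

-- the OR pattern itself behaves correctly: this returns 1
SELECT COUNT(*) FROM #t
WHERE (@IsReconciled IS NULL) OR (ReconciledInd = @IsReconciled)

DROP TABLE #t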
We have an interesting problem. We are attempting to migrate from SQL Server 2000 to SQL Server 2005. The schema we have is exactly the same, and the new 2005 box is more powerful than our 2000 box.
Here is our schema:

tbl_Items: ItemID int (PK), ReferenceID int, SessionID varchar(255), StatusID int

tbl_ItemsStatus: StatusID int (PK), IsInternalStatus bit

There is an index on (ReferenceID, SessionID, StatusID) and another on (SessionID, StatusID).
This is the query:

DECLARE @referenceid INTEGER
SET @referenceid = 1019

SELECT MAX(i2.itemid)
FROM tbl_Items i2 (NOLOCK)
JOIN tbl_ItemsStatus s (NOLOCK) ON i2.StatusID = s.StatusID
WHERE s.IsInternalStatus = 0
  AND i2.referenceid = @referenceid
  AND i2.sessionid IN (
      SELECT i3.sessionid
      FROM tbl_Items i3 (NOLOCK)
      WHERE i3.referenceid = @referenceid
        AND i3.status <> 7 AND i3.status <> 8 AND i3.status <> 10
        AND i3.itemid IN (
            SELECT MAX(i4.itemid)
            FROM tbl_Items i4 (NOLOCK)
            WHERE i4.referenceid = @referenceid
            GROUP BY i4.sessionid
        )
        AND i3.itemid NOT IN (
            SELECT MAX(i7.itemid)
            FROM tbl_Items i7 (NOLOCK)
            WHERE i7.referenceid = @referenceid
              AND i7.SessionID IN (
                  SELECT i5.SessionID
                  FROM tbl_Items i5 (NOLOCK)
                  WHERE i5.status <> 11
                    AND i5.referenceid = @referenceid
                    AND i5.itemid IN (
                        SELECT MAX(i6.itemid)
                        FROM tbl_Items i6 (NOLOCK)
                        WHERE i6.referenceid = @referenceid
                          AND i6.status IN (7, 11, 8)
                        GROUP BY i6.sessionid
                    )
              )
            GROUP BY i7.SessionID
        )
  )
GROUP BY i2.sessionid
We know this query is pretty bad and can be optimized. However, if we run it as is on 2005, it takes about 2 hours; if we run the exact same query on 2000, it takes 9 seconds.

So the query takes 2 hours on 2005. However, if we omit the s.IsInternalStatus = 0 condition or the i2.referenceid = @referenceid condition, it takes about 9 seconds.

Why would this be? It makes no sense that omitting one of those WHERE conditions would take the query from 2 hours down to 9 seconds. We know it's a bad query, but this doesn't make sense.
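The first thing we plan to try is refreshing statistics after the upgrade and capturing the actual plan in text form on both servers; a sketch, assuming the 2005 database was restored from a 2000 backup:

EXEC sp_updatestats        -- refresh statistics carried over from 2000
GO
SET STATISTICS PROFILE ON  -- returns the actual plan with row counts alongside the results
GO
-- run the query above here, then compare the plan output between the two servers
SET STATISTICS PROFILE OFF
GO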
Hi! I'm studying for my MCSE 70-228 certification and I'm trying some things with backing up transaction logs and shrinking them. Here's what I do (there is no activity in the database, by the way). I have a transaction log of 1792 KB, and I run the following commands:

BACKUP LOG TestDB TO TestDBBackup
DBCC SHRINKFILE ('TestDB_Log', 0)

The transaction log is now 1280 KB. I run the same commands again, and now my transaction log is 1024 KB. Any idea why it didn't shrink to 1024 KB the first time?

Thanks!
Jeff
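P.S. For what it's worth, the shrink seems to happen in chunks. DBCC LOGINFO lists the virtual log files that make up the physical log, and my understanding is that SHRINKFILE can only remove whole virtual log files from the end of the file:

DBCC LOGINFO ('TestDB')  -- one row per virtual log file; Status = 2 means the VLF is in use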
Guys, I have some data in an Excel sheet. Some of the columns have NULL values for a certain number of rows until the data starts. What makes it so weird is that when previewing this in the wizard, the whole column shows up as NULL when the number of leading NULLs is quite large; when there are only a few NULLs, the column works fine! Can anyone explain this? We tried some manual work, cutting some of the rows from below and putting them at the start, and it worked! It's such strange behavior, though.

Shiko
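P.S. Possibly related: as far as I know, the Excel driver guesses each column's type by sampling only the first several rows, so a long run of leading NULLs can make it misjudge the column. Some people work around it with IMEX=1 in the connection string; a sketch using OPENROWSET (file path and sheet name are made up, and ad hoc distributed queries must be enabled):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=C:\Data\Book1.xls;HDR=YES;IMEX=1',  -- IMEX=1 reads mixed columns as text
    'SELECT * FROM [Sheet1$]')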
I created two packages: a parent package containing a 50-iteration loop that runs a secondary package on each iteration. I have reached the following conclusion:

My package takes an average of 5 seconds from the time it finishes executing one iteration to the time it starts the next. After about 30 iterations, the average time between end and start increases significantly, to about 12 seconds or even more.

All packages have DelayValidation set to false and receive several variables from the parent package. As for the logging, it is done to files based on a path coming from a variable in the parent package.

To execute the parent package I am using dtexecui.exe, and I consider this behavior rather strange. Has anyone experienced the same? Can anyone test this?

My environment is 4 x64 processors with 8 GB of memory, so I guess it's good enough to get close to 0 seconds from end to start.
I have a script component in a data flow that is exhibiting some strange behavior. In the PreExecute event of the data flow, I stuff a recordset into a variable that is declared at the data flow scope. Within the data flow, I use a script component to read in the data from the recordset.
Example:
Dim olead As New Data.OleDb.OleDbDataAdapter
Dim dt1 As New System.Data.DataTable
Dim row As System.Data.DataRow

' Load the ADO recordset stored in the package variable into the DataTable
olead.Fill(dt1, Me.Variables.rsIntRateStrata)
If I display the count of the records in the data table dt1, it shows 42 rows, which is correct. Run the package, everything runs as expected. So far, so good.
Now, I set up another source/destination within the same data flow, as well as a script component between them, the same as the first flow described above. Now my data flow has two parallel flows (different sources and destinations). I copy the same script logic from the first flow into the second. Run the package: no errors, everything is fine... except when I inspect the data, it looks like the transformation isn't working correctly in the second script.
So I display a messagebox from each script component at run time. The first component displays 42 records, while the second displays 0 records. Same variable, same data flow.
So I delete the first (original) flow from my data flow. Run the package again. Now the messagebox says 42.
What is happening here? Do I have to create two variables to duplicate the same recordset if I need to use it multiple times within the same data flow? Is this a bug?
BEGIN
    DECLARE @datefin_flag datetime, @strip datetime

    SELECT @strip = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)

    SELECT @datefin_flag = DATE_FIN_PERIODE_FISCALE
    FROM DM_LKP_CALENDRIER_PERIODE_F
    WHERE DATE_DEBUT_PERIODE_FISCALE < @strip
      AND DATE_FIN_PERIODE_FISCALE = @strip

    --select @datefin_flag
    --select @strip

    IF (@datefin_flag != @strip)
        RAISERROR('You cant run this', 16, 1)
END
This query should raise the error, but it completes successfully, since today's date is not the same as the date in the database. If you select @datefin_flag it returns NULL, and if you select @strip it brings back today's date. How can NULL be treated as equal to today's date?
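A minimal sketch of the comparison behavior in question: comparing NULL to anything with != yields UNKNOWN rather than TRUE, so the IF branch is simply not taken:

DECLARE @d datetime
SET @d = GETDATE()
IF (NULL != @d)
    PRINT 'comparison was TRUE'  -- never printed
ELSE
    PRINT 'comparison was not TRUE: NULL compares as UNKNOWN'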
Hi, can anyone explain this? I have installed the same ASP.NET application on two distinct hosts, each with its own SQL Server 2000 database. However, some SQL queries in stored procedures do not return the same results. For example, on one server the query "select @value = (select ....)-(select ...)" works, and on the other one it returns a null value. Any hint?

Thanks for your help,
Johann
Execute the following T-SQL in Query Analyzer on SQL Server 2000:

DECLARE @dTest DATETIME
SET @dTest = '2001-1-1 1:1:1:991'
SELECT @dTest
SET @dTest = '2001-1-1 1:1:1:997'
SELECT @dTest
SET @dTest = '2001-1-1 1:1:1:999'
SELECT @dTest

What do you get? This is my result, which is weird:

2001-01-01 01:01:01.990
2001-01-01 01:01:01.997
2001-01-01 01:01:02.000

What is the reason for this weird behavior?
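As far as I know, datetime stores the time portion in 1/300-second ticks, so milliseconds are rounded to the nearest .000, .003, or .007; a quick check of the rounding:

SELECT CAST('2001-01-01 01:01:01.991' AS datetime)  -- rounds down to .990
SELECT CAST('2001-01-01 01:01:01.999' AS datetime)  -- rounds up to the next second, .000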
We are in the process of upgrading from SQL Server 2000 to 2005. During testing we came across the following situation.
To reproduce the issue, you can do the following. Create this structure in both a 2000 and a 2005 server instance:
CREATE TABLE test (a int, b varchar(30))
INSERT INTO test (a, b) VALUES (1, '2')
INSERT INTO test (a, b) VALUES (3, '3a')
INSERT INTO test (a, b) VALUES (4, '4')
Then run the following statement in both servers:
UPDATE test SET a = ltrim(rtrim(b)) WHERE b NOT LIKE '%a%' AND ltrim(rtrim(b)) <> a
In 2000 this last statement will execute with no problem and it will update one row, whereas in 2005 the following error message is given:
Msg 245, Level 16, State 1, Line 20 Conversion failed when converting the varchar value '3a' to data type int.
Looking at the execution plan, it seems that 2005 first tries to evaluate ltrim(rtrim(b)) <> a and then excludes the rows containing an 'a', whereas 2000 first excludes the rows containing an 'a' and then evaluates the inequality.
I know fixing this one statement is easy, but I'm more concerned about having to rewrite many more stored procedures where we find this same scenario; is there any setting that can be changed to avoid this?
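For reference, the rewrite we are testing forces the evaluation order with CASE so the conversion only happens for rows that pass the LIKE test (a sketch, not claiming it is the only option):

UPDATE test
SET a = LTRIM(RTRIM(b))
WHERE CASE WHEN b LIKE '%a%' THEN 0  -- rows containing 'a' never reach the CAST below
           WHEN CAST(LTRIM(RTRIM(b)) AS int) <> a THEN 1
           ELSE 0
      END = 1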
Hi, I've got a SQL Server 2000 database that runs fine. About 9 months ago I altered one of the stored procedures, and ever since then, whenever the machine is rebooted, the stored procedure is "reverted" back to the old version... ??? Is there any way I can recreate the sproc in a job that runs every day? And why would it be doing this?
A stored procedure takes an IN parameter and an INOUT parameter, and returns an OUT parameter. When this stored procedure is defined in SQL Server 2000, the JDBC DatabaseMetaData method getProcedureColumns() returns three rows in the resultset:
one for the IN parameter (COLUMN_TYPE=1), one for the INOUT parameter (COLUMN_TYPE=2), and one for the OUT parameter (COLUMN_TYPE=5). However, when the same stored procedure is defined in SQL Server 2005 (SP2), the getProcedureColumns() method returns only two rows in the resultset: one for the IN parameter (COLUMN_TYPE=1) and one for the INOUT parameter (COLUMN_TYPE=2). No row for the OUT parameter is returned.
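For context, the procedure shape is essentially the following (a made-up sketch, not the production code; in T-SQL an OUTPUT parameter is in/out, and the return value is what JDBC reports as COLUMN_TYPE=5):

CREATE PROCEDURE TestParams
    @inParam int,            -- IN
    @inOutParam int OUTPUT   -- INOUT
AS
BEGIN
    SET @inOutParam = @inParam + 1
    RETURN 0                 -- the return value / OUT
END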
The jTDS JDBC driver was used for both of the above tests. When the Microsoft JDBC Driver for SQL Server 2005 was used against SQL Server 2005, the same behavior (only two rows in the resultset) was observed.
Has anyone else run into this problem? Is this a bug in SQL Server 2005, given that the same jTDS driver works as expected against SQL Server 2000 but not against SQL Server 2005? Any feedback will be appreciated.
The publisher is 2005 x64; the subscribers are SS2000 (SP3) and SS2005 x64. Pull agents, no filters on subscriptions. We are seeing many seemingly random conflicts between the SS2000 subscriber and the publisher. It happens on several different tables.
One table is never edited; only inserts happen everywhere, and deletes happen on the SS2000 subscriber. Deletes will sometimes generate a conflict. The reason is "The row was deleted at 'CTS11.CTS' but could not be deleted at 'cts4a.cts'. Unable to synchronize the row because the row was updated by a different process outside of replication." CTS11 is the SS2000 subscriber, CTS4A is the publisher.
Probably an unrelated bug, but when looking at conflicts on this same table in the SS2005 conflict viewer, I get the error "ID is neither a DataColumn nor a DataRelation for table summary (System.Data)" and then "Column ID does not belong to table summary (System.Data)". The ID column is the rowguid; the only unusual thing about the table is that it has a varchar(8000) field plus some other fields.
Other tables generate conflicts with the reason "The row was updated at 'CTS11.CTS' but could not be updated at 'cts4a.cts'. The merge process was unable to synchronize the row." I enabled verbose logging in the merge agent, but the log file didn't contain any further explanation.
This same topology and schema worked fine when all publishers and subscribers were SS2000.
Any insight into how to fix this would be appreciated.
I've set up a basic login page that grabs some data from SQL Server. I set up a SQL Server user for the connection and it all worked fine... until now, that is. I get a "System.Data.SqlClient.SqlException: SQL Server does not exist or access denied." error.
Here's the weird thing though: I'm storing my connection string in web.config. If I open web.config and re-save it, my login system works again, for a while anyway. Then it goes back to not working. When you save a web.config file, is it compiled in any sort of way? When I save it and run my web page, there's a delay the first time round, like there is when you update an aspx page or a DLL. Anyway, it's as if that process sorts out my problem, but only for an hour or so, and then the problem happens again.
I'm not making any changes to web.config, just re-saving it. And everything else works fine, for a while at least; then the SQL error happens.
Hi, I just installed SQL Server 2005 EE. After installing it, I took a look at the program. However, I found that SQL Server 2005 EE doesn't include any program you can create your database with. There's nothing like MS Access, where you can run the program and create a database file. SQL Server 2005 EE doesn't have anything like that, so I have to create the database file in Visual Studio 2005 applications such as VWD or VB.

Is this just me, or is that the way SQL Server 2005 EE is?
I have an application which is a SQL Server 7 back end with VB 6 SP5 front end.
For some of my users, when the VB code calls a particular stored procedure, it causes VB to die without executing the stored procedure. I am connecting via ADO.
If I execute the same stored procedure via Query Analyzer on the same machines it works perfectly.
The same code also works perfectly on all the development machines.
The error only occurs for a particular stored procedure. Other stored procedures execute perfectly on the users' machines.
I have a database which takes up 12.5 GB of disk space, yet when I drop all the objects out of it, it says that I still have 9.5 GB of disk taken up by the database!
The database itself is nothing special. There are a couple of inefficient tables (char rather than varchar etc) and one that contains a column of datatype "image" but other than that it is pretty good.
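For what it's worth, this is what I am using to see where the space is going and to release unallocated space back to disk (the database name is a placeholder):

EXEC sp_spaceused                 -- reserved vs. actually used space
DBCC SHRINKDATABASE (MyDatabase)  -- releases unused space from the files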
If someone can point me to documentation on this, I would appreciate it...
If there isn't any....
I am wondering about the behavior of SQL Server for table scans. In other databases, table scans are not really table scans; they are scans of the underlying tablespace for all the rows in the table, and if many tables are placed into the same tablespace, the obvious slowdown occurs as rows are scanned that are not in the table.
This used to be the case in SQL Server 7, but is it still the case in 2005 that if the plan says 'table scan' it will in fact scan the filegroup the table is in?
Some other databases also keep a map of row numbers and the table each row belongs to, and the optimizer decides whether to scan the data itself or to navigate through the map and fetch a row at a time, depending on the stats.
It seems that the graphical plan does not tell me more than 'table scan'. Is there any way to see down to the physical level of what the optimizer is going to do?
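The closest I have found so far is the text plan output, which at least lists the physical operators and the object each one touches; a sketch (table name is a placeholder):

SET SHOWPLAN_ALL ON
GO
SELECT * FROM MyTable
GO
SET SHOWPLAN_ALL OFF
GO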
We are having some problems with the SQL Server 2005 Keep Alive mechanism causing dropped connections. The Keep Alive time and interval appear to work as documented. However the number of retries made before dropping the connection seems to be variable. When running Network Monitor I have seen this vary between 3 and 9 retries. The value for TcpMaxDataRetransmissions in the registry is set to 5. Does anyone know what the correct number of retries should be for SQL Server 2005 Keep Alive? Is this dynamically determined or is this a bug?
I am struggling with creating a simple stored procedure. I want to select MAX(TimeKey) into a variable @Val inside a dynamic SQL statement, as in the SP below. The SP creates successfully, but when I run it I get an error. The statement being executed is:
Select @Val=MAX(TimeKey) FROM ABC
Msg 137, Level 15, State 1, Line 1
Must declare the scalar variable "@Val".
Can someone help me understand this weird behavior, and how to get rid of it? The table and SP scripts are below.
CREATE TABLE ABC (TimeKey int, Data_Val int)

CREATE PROCEDURE MAXVal (@Table_Name varchar(30))
AS
BEGIN
    DECLARE @Val INT,
            @SQL varchar(100)

    SET @SQL = 'Select @Val=MAX(TimeKey) FROM ' + @Table_Name
    EXEC (@SQL)  -- fails: the dynamic batch is a separate scope, so @Val is not declared there
END
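From what I've read, the usual way to get a value out of dynamic SQL is sp_executesql with an OUTPUT parameter; a sketch of how the body could read instead (untested; note that sp_executesql requires nvarchar):

DECLARE @Val int, @SQL nvarchar(200)
SET @SQL = N'SELECT @Val = MAX(TimeKey) FROM ' + QUOTENAME(@Table_Name)
EXEC sp_executesql @SQL, N'@Val int OUTPUT', @Val = @Val OUTPUT
SELECT @Val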
I have two SQL Server instances on two servers. One server is my web server and database server, and the other is just a database server. I have an application that calls a stored procedure located on the web/database server, which runs a query on the OTHER database server. I use a linked server in my first instance to make the call possible.
Everything was working just fine for months, until the database server was restarted and its IP address was changed. The database name is the same, however, and my first SQL Server instance has no problem running queries against the other server's tables. But when you try to run the application, I get the following error:
Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection
I have mixed mode authentication selected, and my security uses the security context with username=sa and password=sa.
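For reference, the linked server login mapping is set up along these lines (the linked server name is a placeholder):

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'LINKEDSRV',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = 'sa',
    @rmtpassword = 'sa'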
So here's the weird part.
The application will only run correctly after I manually run a SQL command from my web server's Query Analyzer against the linked SQL Server. However, after a few minutes, the same error comes back! So as a temporary fix, I scheduled a DTS job to run a simple query on the linked server every two minutes, so the application keeps working! It's almost as if the web server's SQL Server forgot that the linked server is there, and by running a simple query in Query Analyzer, the connection gets refreshed and everything is normal again, for about 3 minutes!
I am completely stumped by what's happening and would appreciate any help. Thank you.
I'm a wee bit of a newbie concerning DTS and have inherited a db with a DTS package containing a Copy SQL Server Objects task set to run nightly. Essentially, it does an informal backup of some core data.
Recently, I was notified that one of the tables it copies is now empty in the destination db. The DTS package shows that it runs successfully with no errors logged, the table in question IS selected to be copied from the source database, there IS data in the source database table, and every other table in the destination database is populated appropriately.
Any ideas on what would cause this one table to be empty without generating any errors?
Hi guys, I have got a weird problem which I have never faced before. I have a table that has 3.8 million records. I found this figure in sysindexes, since I could not get the count using COUNT(*). After breaking my head for a while, I tried using TOP to get the results, and I did get them, but only up to the 630,000th record and not a record more. I tried DBCC CHECKDB as well as DBCC CHECKTABLE with no success. I would really appreciate it if anyone can solve this problem.
Here's a weird one. I'm running SQL Server 7, and when I run a backup something weird happens. When I perform the backup via Enterprise Manager by right-clicking on the database I want to back up, I click OK, but no progress blocks show up in the window showing the status of the backup. The completion window pops up saying that the DB has been backed up. OK, fine, maybe the backup is really quick. Then, through Explorer, I look at the directory where the database backup is placed. It reads 0 KB, but about a minute later the size of the backup changes to the correct size. Seems strange; wouldn't the "completed" window show up only after the backup is out there?

Well, to make matters a little more interesting, when I define this one database in a maintenance plan, the plan will complete but no backup is present! The log file shows it runs OK, and when I run the plan through Query Analyzer it says OK too. But no backup is present.

What gives?
Hi, this thread is a reformulation of a prior thread. I created a login 'Network Service' at the server level in Management Studio Express. I use Windows authentication. Then I defined a user for my database associated with the login 'Network Service', because the ASP.NET application uses that account (IIS 6.0). This user received the db_datareader and db_datawriter roles. This works.

Now I experimented a little bit: I removed the login 'Network Service' at the server level. Result: the application still works. Then I removed the BUILTIN\Users login from the login list at the server level. Result: I get the error "login failed for Network Service". I then recreated the login 'Network Service' at the server level, but not the BUILTIN\Users login. Result: it works again.

My conclusion is: one of the two logins must be in the list, Network Service or BUILTIN\Users. Is this right? And why do I get that error only when both logins are removed, and not when only Network Service is removed?

Thanks
I can't understand why I get two different results. Running with brackets I get NULL, and without brackets I get the declared variable value, which is 'noname'.
Below is Query 1:
Declare @testvar char(20)
Set @testvar = 'noname'
Select @testvar = pub_name FROM publishers WHERE pub_id = '999'
Select @testvar
The output of this query is 'noname'.
But when I write the same query in the following manner, I get NULL. Please note that the only difference in the query below is that I used brackets and a SELECT in the @testvar assignment:
Declare @testvar char(20)
Set @testvar = 'noname'
Select @testvar = (Select pub_name FROM publishers WHERE pub_id = '999')
Select @testvar
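A minimal sketch of what I think is going on: when no row matches, SELECT @var = col FROM ... leaves the variable unchanged, whereas assigning from a bracketed scalar subquery always assigns, and the subquery yields NULL when it finds no row:

DECLARE @testvar char(20)
SET @testvar = 'noname'
SELECT @testvar = (SELECT 'x' WHERE 1 = 0)  -- subquery returns no row, so NULL is assigned
SELECT @testvar                             -- NULL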
We've had Reporting Services running in a production environment for 6 months fine, but since Saturday every report causes the following error (in both Report Manager and SOAP calls):
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
Specified argument was out of the range of valid values. Parameter name: date
Now, before you jump to conclusions: this error occurs on reports both with and without parameters (i.e., on reports that have no "date" parameter at all).
The next bit of info is the weird bit...
It was working on Friday (25/March/2006), so as a test I switched the server's clock back to Friday, and BINGO... it worked. Then I changed it to Saturday (26th March) and it doesn't work. In fact, the service will not work for the next 7 days, until April 2nd 2006 (when I changed the system date to the 2nd, it worked again). Moving forward from there, it looks like it's working fine.
Does anyone have any suggestions? This is a production environment, so obviously changing the system date as a quick-fix workaround won't suffice.
This was originally posted on DBForums.com, so here is the link: http://www.dbforums.com/showthread.php?t=1614086
Since some of the Microsoft staff come around here occasionally, I figured I should at least link to it here. This is the gist of the problem, though. I was asked to come up with a script to create all required data directories in case an emergency was declared and someone had to rebuild one of our database servers. Most of you are probably thinking of hitting up the sysaltfiles table about now, but this will turn into a cautionary tale. Try it if you dare. The one requirement is that you install the data for SQL Server in a non-standard directory that has a short path (such as C:\MSSQL8, instead of the whole C:\Program Files\...).
What I am unclear on is whether this is a problem in the reverse function, the r(l)trim function, or the fixed-width datatype. I have confirmed that transferring the data to a temp table did not eliminate the...oddity.
select filename from master..sysaltfiles where dbid = 2
go
select reverse(rtrim(filename)), filename from sysaltfiles where dbid = 2
go
select reverse(rtrim(filename)) from sysaltfiles where dbid = 2
I have also had two independent DBAs confirm this oddity exists, so this should be relatively easy to replicate.
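One check that may be worth adding: what the pad character of the fixed-width column actually is, since RTRIM only strips spaces. If the padding were CHAR(0) rather than spaces, RTRIM would leave it in place and REVERSE would move it to the front of the string:

SELECT UNICODE(RIGHT(filename, 1)) AS last_char_code,  -- 32 = space, 0 = null character
       DATALENGTH(filename) AS bytes
FROM master..sysaltfiles
WHERE dbid = 2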
Hello, I have a question. SQL Server 2000 is installed on Windows 2000 Server. If I want to upgrade to Windows 2003, will there be any problem with SQL Server 2000? I am worried about whether we will have problems after the upgrade. What do I need to do? Many thanks.
I have three tables I am using: aspnet_Users, Story, and CustomizedStory. Story and CustomizedStory are related via a foreign key, StoryID. I've set up the tables so that when I delete a Story row, it cascade-deletes the corresponding row from CustomizedStory. Each CustomizedStory row has a reference to UserID from aspnet_Users. Since I didn't want to mess with the table definition by adding a cascade delete option on aspnet_Users, I decided to use a trigger, essentially deleting all customized stories and associated stories if a user is deleted:

ALTER TRIGGER [dbo].[DeleteCustomizedStories] ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    DELETE FROM dbo.Story
    WHERE StoryID = (SELECT StoryID FROM dbo.CustomizedStory
                     WHERE UserID = (SELECT UserID FROM deleted))
END
The problem I am having is that it deletes all of the CustomizedStory rows, as specified by the cascading option, but doesn't delete the Story rows. I can't understand why this is happening, especially when I explicitly told it to delete Story rows.
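One thing I am going to try is the set-based form with IN instead of =, since the = form assumes each subquery returns exactly one row (a sketch, assuming a user can have many customized stories and a batch can delete many users):

DELETE FROM dbo.Story
WHERE StoryID IN (SELECT StoryID FROM dbo.CustomizedStory
                  WHERE UserID IN (SELECT UserID FROM deleted))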
I've made a new table that stores the UserId, a uniqueidentifier obtained from Membership.GetUser().ProviderUserKey. If I make a select statement through a stored procedure in code-behind, it runs as it should:

Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource, it fails:

<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
    <SelectParameters>
        <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>

I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?