I have two aspx pages: "login.aspx" and "test_connection.aspx". "login.aspx" uses the Membership class for my website's security. If you have just restarted the computer and load "login.aspx" first, it works fine and you can create a user. When you then load (or view) "test_connection.aspx", you get this error message:
System.Data.SqlClient.SqlException: Cannot open user default database. Login failed.
Login failed for user 'sqluser1'.
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
If I restart my computer and load "test_connection.aspx" first, that page works just fine, and if I then load "login.aspx", I get this error message:
Cannot open user default database. Login failed.
Login failed for user 'YECIAASPNET'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: Cannot open user default database. Login failed.
Login failed for user 'YECIAASPNET'.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
the page "test_connection.aspx" is using an sql username i have created. I am only using one database in this project and thats it ASPNET.mdf
What's happening? I can't understand it; I don't know what I'm doing wrong. I am very new to .NET and MS SQL. I'm not sure if this is related to the other problem I have in another thread (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=254568&SiteID=1).
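A hedged guess at the cause, based only on the symptoms above: an .mdf file can be attached by only one connection at a time, so if the membership pages attach ASPNET.mdf through one connection string while "test_connection.aspx" reaches the same file as 'sqluser1' through another, whichever page connects first wins and the other login fails with "Cannot open user default database." One server-side workaround (a sketch, assuming SQL Server 2005 and that you can connect as an administrator) is to point the failing login's default database at something that is always available:

-- Sketch: let the login itself succeed even while the .mdf is attached elsewhere.
ALTER LOGIN [sqluser1] WITH DEFAULT_DATABASE = [master];

The cleaner fix is usually to make both pages share one connection string so they attach the database the same way.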
I was working on a production server and stopped the SQL Server service along with the SQL Server Agent, since I had to copy an MDF file. Now I have started the services again, but I find that there are no transactions happening. What could be the reason?
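There isn't enough detail here to be sure, but a first hedged check after that kind of restart is whether the database actually came back online, since a copied or locked .mdf can leave it suspect or offline:

-- Quick health check after the restart; sp_helpdb works on SQL Server 2000 and later.
EXEC sp_helpdb;
-- On SQL Server 2005 and later, the state column is more explicit:
SELECT name, state_desc FROM sys.databases;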
I'm facing a problem: when more than one user tries to insert a record through the application, only one user's data gets inserted. What might be the problem?
This is the stored proc I'm using:
create proc [dbo].[GSK_insertregion]
(
    @Country varchar(50),
    @Userid int,
    @RegionId varchar(50)
)
as
begin
    insert into UserHierarchy (Country, Userid, RegionId)
    values (@Country, @Userid, @RegionId)
end
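For what it's worth, a plain INSERT procedure like this is safe under concurrent callers, so the lost row is more likely in the application layer (for example, a connection or command object shared between users). A quick hedged test from two separate query windows (the values below are arbitrary test data):

-- Session 1:
EXEC dbo.GSK_insertregion @Country = 'India', @Userid = 1, @RegionId = 'R1';
-- Session 2, at the same time:
EXEC dbo.GSK_insertregion @Country = 'India', @Userid = 2, @RegionId = 'R2';
-- Both rows should be there:
SELECT * FROM dbo.UserHierarchy WHERE Userid IN (1, 2);

If both rows appear here but not through the app, the problem is on the application side.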
I set up my package for logging to SQL Server. I set up a connection manager for the logging, but did not specify a database. Doesn't that mean that the logging should default to msdb.sysdtslog90?
But when I check the table after I run the package, there's nothing in it.
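With no database specified, the SQL Server log provider should indeed fall back to msdb. One hedged sanity check, assuming SSIS 2005: sysdtslog90 is only created after the first successful logged run, and only the events ticked under Details in the logging dialog get written, so check both and then look for recent rows:

SELECT TOP 20 event, source, starttime, message
FROM msdb.dbo.sysdtslog90
ORDER BY starttime DESC;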
Hi, I have deployed a website on a server running Windows 2000 and IIS 5.0. It uses SQL Server 2000, which is on another remote server. While developing, I used the visual tools in VS.NET to make a connection and used the dynamic properties of the connection object to map the connection string to the entry in the config file. This works fine on my development machine, which has IIS and SQL Server 2000 on the same machine. The entry in web.config for my connection string is:
value= " server=xxx.yyy.com; Trusted_Connection=yes;provider=SQLOLEDB.1;Initial Catalog=events; User id=myuser; Password=password;"
where xxx.yyy.com is the server running SQL Server2000.
I do not get any error, but the connection does not happen and my datagrid does not get filled. The code for creating the connection is designer-generated. Any clues? -svp
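One thing worth checking in that string: Trusted_Connection=yes tells the provider to use Windows authentication, so the User id/Password pair is ignored, and on the remote server the web server's worker account almost certainly has no SQL login. A hedged guess at the intended SQL-authentication version (same server, database, and credentials as above; the provider= keyword suggests an OleDbConnection):

value="Provider=SQLOLEDB.1;Data Source=xxx.yyy.com;Initial Catalog=events;User ID=myuser;Password=password;"

If Windows authentication is what you actually want instead, drop User id/Password, keep Trusted_Connection=yes, and grant the web server's account a login on the SQL Server box.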
Hello people, I don't expect anyone to know the answer to this, but I guess we'll see, huh?
I'm using Microsoft SQL Server Management Studio to do all my SQL work, and one of the tools that comes with it is SQL Server Profiler. Anyway, I'm using this profiler to capture SQL activity in the background while I work with the front end.
Here is the problem I'm facing... I am looking at this table, and I see that when I do a certain process on the front end, it inserts 3 rows into the table. So I'm thinking, "In the trace, I should be looking for an INSERT, or a stored procedure with an INSERT in it."
So I run the trace on the front-end process, and it shows me all the stored procedures that run while these 3 rows are inserted into the table (by the way, this process does other things besides inserting into this table, but that's the part I'm currently working on).
What I do here at my job is take this code, tweak it to work for our front end, and voilà, we're good to go. So, with that in mind, I made my own stored procedure that calls all of these stored procedures that are happening.
My result......
I get only 2 rows inserted into the table: the first row and the last row. The middle row isn't inserted, for some mysterious reason. I tried checking all the stored procedures for unique information pertaining to that specific row insert, but to no avail.
So my question to you guys is: is there anything I'm overlooking that could possibly be inserting that row into the table? Thanks, guys!
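One hedged guess: a trigger. Profiler doesn't show statements inside triggers unless you add the SP:StmtStarting/SP:StmtCompleted events to the trace, so an INSERT trigger on this table (or on a related one) could be producing that middle row without ever appearing as a separate call. Worth checking (YourTable is a placeholder for the real table name):

-- List any triggers defined on the table; works on SQL Server 2000 and 2005.
EXEC sp_helptrigger 'dbo.YourTable';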
I have two problems I need some help with.

First, I've just inherited a system and am delving into a few timeout problems that are causing problems for the users. Now, if I do a simple select * from the table (which looks to be the cause of the problem at this stage) in QA, I get the results back in less than a second. If I open the table in EM it takes about 10. Is there a difference in viewing the data this way? I'm used to EM being virtually the same speed. There is only one row. Minor question really, just something I'd like to understand if there is a difference.

CREATE TABLE [QUERY] (
    [QUERY_ID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL,
    [CAT_ID] [numeric](18, 0) NOT NULL,
    [QUERY_DESCR] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [USER_NAME] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [USER_ID] [int] NOT NULL,
    [IND_EURO] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_QUERY_IND_EURO] DEFAULT ('N'),
    [IND_DGCOLUMNS] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_QUERY_IND_DGCOLUMNS] DEFAULT ('N'),
    [NO_GROUPS] [int] NOT NULL CONSTRAINT [DF_QUERY_NO_GROUPS] DEFAULT (0),
    [NO_FIELDS] [int] NOT NULL CONSTRAINT [DF_QUERY_NO_FIELDS] DEFAULT (0),
    [NO_LINES] [int] NOT NULL CONSTRAINT [DF_QUERY_NO_LINES] DEFAULT (0),
    CONSTRAINT [PK_QUERY] PRIMARY KEY CLUSTERED ([QUERY_ID]) WITH FILLFACTOR = 90 ON [PRIMARY],
    CONSTRAINT [FK_QUERY_QUERY_CATEGORY] FOREIGN KEY ([CAT_ID]) REFERENCES [QUERY_CATEGORY] ([CAT_ID]) ON DELETE CASCADE ON UPDATE CASCADE
) ON [PRIMARY]
GO

I don't think any re-indexing has been done on this (or the other tables in the db). I was wondering if constant adding/deleting rows could cause the index to be massive and in need of a good clear out. Any pointers would be appreciated. From what I can tell, there were some problems trying to get replication to work. I need to dig deeper to see if this is now correct.

Secondly, there is another table in the same database.

CREATE TABLE [FIELD_DATA] (
    [ID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL,
    [DATA_ID] [numeric](18, 0) NOT NULL,
    [FIELD_ID] [numeric](18, 0) NULL,
    [FIELD_CODE] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [FIELD_VALUE] [numeric](15, 5) NULL,
    CONSTRAINT [PK_FIELDDATA] PRIMARY KEY CLUSTERED ([ID]) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
GO

It holds approx 4 million rows. The rest of the tables have minimal data and about the same amount (consider them the same if you will). Now, another 'copy' of this database is held elsewhere (different client data) and this holds 40 million rows. The difference is that the first DB is 4.5GB and the second 6.5GB (approx). Does this prove my theory that re-indexing would be a good idea?

Thanks
Ryan
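On the re-indexing theory: a hedged way to check before rebuilding anything, using the SQL Server 2000 tooling the QA/EM references imply, is to measure fragmentation first and rebuild only if scan density turns out poor:

-- Show fragmentation for the big table (run in the relevant database).
DBCC SHOWCONTIG ('FIELD_DATA');
-- If scan density is well below 100%, rebuild its indexes:
DBCC DBREINDEX ('FIELD_DATA');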
"The log file for database is full. Back up the transaction log forthe database to free up some log space."Now I only know this way to deal with that manually,Step1. in option , chance Recovery model from FULL to Simple.Step2: go to task to manually shrink the log fileStep3: Change recovery model back from simple to FULL.But by this way, I could get same problem again, the log file is fill,and need free up.Could you give an idea how to prevent this from happening? what andhow should I do???Thanks a lot in advance for your help.
I have a 500-page report that runs per file key. I have referenced some of the fields in the report header section so they display on every page, and for every file key the report header fields should change as well. This works for all file keys, but the reference from the body to the report header does not update when the file-key grouping changes to another grouping.

Please let me know if there is any possible way to do this.
I have configured an alert like the one below to track all blocked events in SQL Server across all databases and then kick off a SQL job that inserts data into a table whenever blocking happens. When there is blocking in SQL Server, I get an email. This is working fine, and I am able to track all the queries.
But how do I get notifications ONLY if blocking is happening for more than 30 seconds or 1 minute, without using sp_configure?
---ALERT
USE [msdb]
GO
EXEC msdb.dbo.sp_update_alert
    @name = N'Blocking Process',
    @message_id = 0,
    @severity = 0,
    @enabled = 1,
    ...
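One hedged approach that avoids sp_configure entirely: instead of firing on every blocked event, have a scheduled SQL Agent job poll sys.dm_exec_requests (SQL Server 2005 and later) and send mail only when a request has been blocked beyond your threshold. A minimal sketch; the 30000 ms threshold, mail profile, and recipient are placeholders:

-- Alert only when something has been blocked for more than 30 seconds.
IF EXISTS (SELECT 1
           FROM sys.dm_exec_requests
           WHERE blocking_session_id <> 0
             AND wait_time > 30000)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',          -- hypothetical mail profile
        @recipients   = 'dba@example.com',   -- hypothetical recipient
        @subject      = 'Blocking longer than 30 seconds detected';
END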
I am writing a query which selects data against a particular date.
I insert values into the table with the current date (today's date), and the record is stored in the format (2006-07-14 16:12:09). Now, when I run the query 2 or 3 minutes later to select the records inserted today, it returns no results.
I think it is because of the time portion (16:12 in this case): when I run the query a few minutes later, it compares against the current time (for example, 2006-07-14 18:12 or 2006-07-14 19:12) instead of just the date, so it does not find the records.
Is there a way to ignore the time portion when running the query, so that it fetches all records inserted today, including the ones inserted at 16:12, no matter what time I run the query today?
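The usual fix is to compare against a range covering the whole day rather than trying to strip the time from every row. A hedged sketch, assuming a table MyTable with a datetime column CreatedAt (both names are hypothetical):

SELECT *
FROM MyTable
WHERE CreatedAt >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)      -- midnight today
  AND CreatedAt <  DATEADD(day, DATEDIFF(day, 0, GETDATE()) + 1, 0); -- midnight tomorrow

This form also stays sargable, so an index on the date column can still be used.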
declare @error int, @rowcount int

select @rowcount = COUNT(1) FROM STG_BCDR;

while @rowcount > 0
begin
    BEGIN TRAN Deletion
[code]....
With the code above, I try to delete records batch by batch to avoid table locking on the BCDR table. The total number of records in this BCDR table is 40,000. However, when I run the code and check the execution plan, the BCDR table still shows a clustered index scan, which means the locking still happens.
If I change the DELETE TOP (5000) ... to DELETE TOP (5) ..., then there is a clustered index seek, which is good. The problem is that deleting only the top 5 records at a time means it will take a very long time to remove all that data.
How can I handle this situation so that I can delete this large amount of data without table locking happening? If table locking happens, other users will not be able to access the table at the same time.
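A hedged sketch of the usual compromise: keep each batch below the point where lock escalation to a table lock kicks in (escalation is considered at roughly 5,000 locks held by one statement) and commit between batches so other sessions can get in. The WHERE clause is a placeholder for whatever filter the elided code uses:

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRAN;

    DELETE TOP (1000) FROM dbo.BCDR
    WHERE ArchiveFlag = 1;          -- hypothetical filter

    SET @rows = @@ROWCOUNT;

    COMMIT TRAN;                    -- committing releases this batch's locks
END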
Hi everyone, I use forms authentication for my Report Manager and Report Server, but I have this problem: the logon page runs successfully, but the redirect to Folder.aspx never happens.
To clarify:
>I get the logon page (UILogon.aspx)
>My user has been registered OK (I have checked the entry in the db to make sure it was created)
>I enter the login & password correctly and the page posts back
The redirect never happens; in the browser, it never leaves UILogon.aspx.
Using Win2003, SQL Server 2005, Reporting Services