I'd like to allow students in a lab to use the Express Manager and T-SQL to create their own databases. I think I've figured out just about everything - but the last step. Here's my thinking on how to do this:
1. Students are power users, not local admin
2. Created a login for the BUILTIN\Power Users group as follows:
create login [BUILTIN\Power Users] from windows with default_database=[master]
go
exec master..sp_addsrvrolemember @loginame = N'BUILTIN\Power Users', @rolename = N'dbcreator'
go
3. Now students can start the Express Manager and connect to:
localhost\sqlexpress using Windows Authentication
4. After they do, they can open and run a script creating a database and populating it with tables and records.
5. All good up to here... but when trying to access the new database using VB Express (file based, not a remote connection) access is denied.
The problem is in the NTFS file permissions assigned to the new database's mdf and ldf files. Local administrators and the built-in Network Service account get permissions, but the student who created the database gets nothing... so the student is denied access to the mdf and ldf files when trying to connect to them using VB Express.
I can go in with a different account that has admin privileges and assign permissions on the mdf/ldf files to Power Users after they're created, but this isn't what I need (students need to work whenever they want without waiting for an admin to fix the file permissions).
Maybe I'm going about this all wrong? If I have to I can insist on students always using VS 2005 instead of VB Express and use connections to a remote database (no problems doing this), but I'm still hoping for an express solution.
I'm new to this forum, so apologies if this is a dumb question (but even if it is - can anyone point me in the right direction?)
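For what it's worth, one workaround I can sketch (untested, and it assumes you control the folder layout on the lab machines): pre-create a data folder whose NTFS ACL already gives Power Users modify rights, and have the students' script create its databases with explicit file paths under that folder so the new mdf/ldf files inherit those permissions. The folder path and database name below are placeholders.
-- assumes D:\StudentDBs already exists and Power Users have Modify on it (hypothetical path)
create database StudentDemo
on primary (name = N'StudentDemo_data', filename = N'D:\StudentDBs\StudentDemo.mdf')
log on (name = N'StudentDemo_log', filename = N'D:\StudentDBs\StudentDemo.ldf')
go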
I must be missing something obvious. I'm not clear on how SQL Express can provide a decent development experience. Consider:
1. I have a class library that includes an Express database file, set to always copy to the output folder.
2. I have code that attaches to the database in the output folder at runtime.
As a result of the above, the first execution after restarting SQL Server will work. Subsequent builds will fail because SQL Server has a lock on the database file in the output folder. Or only one of the two files (DB or log) will be copied, so they are out of sync and detected as corrupt when the code attempts to connect. Either way, I have to restart SQL Server and rebuild before the DB connectivity will work again.
I figured I might need to detach from the DB in my shutdown code, but I cannot execute sp_detach_db from my code because it cannot be done through a connection to the database (database is in use). Moreover, I'm not sure I want to because the user might execute multiple instances of the application.
What am I missing here? Do I need to put steps in my build script to stop and start the local SQL Server instance?
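In case it helps frame the question: sp_detach_db has to be issued from a connection to another database (typically master), not from a connection to the database being detached. A minimal sketch of what shutdown code could send, assuming a database name of MyTestDb (hypothetical) and that no other instance of the app is still attached:
-- run on a connection to master, not to the database being detached
use master
go
alter database MyTestDb set single_user with rollback immediate
go
exec sp_detach_db @dbname = N'MyTestDb'
go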
I have a web application and a SQL Server 2000 database in a hosted environment. How can I migrate my database to another server where I have SQL Server Express 2005?
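A common path for this kind of move is backup/restore, which works going from SQL Server 2000 up to 2005 Express (not in the reverse direction). A minimal sketch with placeholder names and paths; the logical file names in the MOVE clauses are assumptions, so run RESTORE FILELISTONLY first to see the real ones:
-- on the SQL Server 2000 host
backup database MyWebDb to disk = N'C:\Backups\MyWebDb.bak'
go
-- copy the .bak to the new machine, then on the SQL Server Express 2005 instance
restore database MyWebDb from disk = N'C:\Backups\MyWebDb.bak'
with move N'MyWebDb_Data' to N'C:\Data\MyWebDb.mdf',
     move N'MyWebDb_Log' to N'C:\Data\MyWebDb_log.ldf'
go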
I am looking for some assistance from the grand knowledge base out there concerning the implementation of SQL Server Express 2005 in a client's strict domain environment.
I am designing and implementing POS software that resides on registers and a server within a number of stores. The registers are running WEPOS and the server is running Server 2003. I run an instance of SQL Server Express on all devices. Each register reads from the server when it can see it; when it cannot, it reads from its local instance. I am seeing a number of performance issues and I am trying to tweak the installation and coding of SSE on all devices.
Now that I've installed SQL 2005 Express Advanced, shouldn't there be a DB environment I can work with? There must be a configuration mistake because I can't see the "container"? Prior to SP1 I had SQL 2005 Express installed and I could open and use SQL; however, once I loaded SP1 that stopped. I thought if I loaded the "Advanced" edition I would be "back in business."
I need to move an entire live environment from MS SQL 2000 (because of developing further before upgrading) to SQL Express but cannot get it to work. Is this possible to do at all?
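It is possible in the 2000 -> Express direction (not the other way around, and only if each database fits within Express's 4 GB data-file limit). Besides backup/restore, detach/attach is another route; a rough sketch with placeholder names and paths:
-- on the SQL Server 2000 instance
exec sp_detach_db @dbname = N'LiveDb'
go
-- copy LiveDb.mdf and LiveDb_log.ldf to the Express machine, then on Express:
create database LiveDb
on (filename = N'C:\Data\LiveDb.mdf'), (filename = N'C:\Data\LiveDb_log.ldf')
for attach
go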
All-- Please help. I have deployed my site to my shared host and I am getting the following RTE... An attempt to attach an auto-named database for file... ..App_Data\CommerceDB.mdf failed ...and I cannot find the issue. Note that I have given full permissions on the root folder for the IUSR account and for the ASPNET account. The full error message is below. Anyway, do you have any suggestions? My web hosting provider has advertised that they do support SQL Express 2005. What exactly do I need to ask them to do in order to get this working? Please advise. Thank you. --Mark Kamoski
Server Error in '/mkamoski/S16' Application.
--------------------------------------------------------------------------------
An attempt to attach an auto-named database for file c:\hosting\webhost4life\member\mkamoski\S16\App_Data\CommerceDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: An attempt to attach an auto-named database for file c:\hosting\webhost4life\member\mkamoski\S16\App_Data\CommerceDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
Source Error:
Line 88: ''' <returns></returns>
Line 89: Public Shared Function GetCart() As ShoppingCart
Line 90: Dim rdr As IDataReader = ShoppingCartProvider.Instance.CartGetCurrentCart(GetUserName())
Line 91: Dim cart As ShoppingCart = New ShoppingCart()
Line 92: cart.Load(rdr)
Source File: c:\hosting\webhost4life\member\mkamoski\S16\App_Code\BLL\ShoppingCartManager.vb Line: 90
I have an SSIS package and want to execute it on a computer that only has SQL Express installed. Since SQL Express doesn't ship with dtexec or dtexecui, what should I do?
I have a web app that uses two SQL Express databases. One of the databases is the membership database. I am deploying the app to a hosted environment. Here is my connection string:
Could not open new database 'Akamojo_ASPNETDB'. CREATE DATABASE is aborted. Cannot attach the file 'e:\hosting\member\akamojo\TournamentLog\App_Data\Akamojo_ASPNETDB.mdf' as database 'Akamojo_ASPNETDB'. File activation failure. The physical file name "c:\inetpub\wwwroot\TournamentLog\App_Data\Akamojo_ASPNETDB_log.LDF" may be incorrect. The log cannot be rebuilt when the primary file is read-only.
The mdf file is not read only. In the hosted environment the database is on the E drive. In my dev environment the database is on the C drive. From the error message, it appears that when it tries to build the log file it is trying to build it on the C drive in a path that doesn't exist in the hosted environment.
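If the only stale piece is the old log path baked into the files from the dev machine, one thing the host (or a deployment script running with enough rights) could do is attach the mdf alone and let SQL Server 2005 rebuild the log in the right place. This is a sketch reconstructed from the path in the error message, not something verified against that host's setup, and it requires the mdf not to be read-only:
-- attach just the primary file and rebuild the log next to it (path taken from the error above)
create database Akamojo_ASPNETDB
on (filename = N'e:\hosting\member\akamojo\TournamentLog\App_Data\Akamojo_ASPNETDB.mdf')
for attach_rebuild_log
go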
I have finished a change request from our client. I need to update the client's database with the one in development. Here are the changes I made to the database:
- Added/changed some tables
- Added/changed some stored procedures
- Added data to some dictionary tables
The data in the client's current database MUST be kept. So how can I merge the changed information into the client's database?
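The usual approach is an idempotent upgrade script you run against the client's database: schema changes first, then procedure changes, then dictionary rows guarded so existing data is untouched. A small sketch of the pattern with made-up object names:
-- 1) schema change: add a column only if it is missing (hypothetical table/column)
if not exists (select * from syscolumns where id = object_id(N'dbo.Orders') and name = N'DiscountCode')
    alter table dbo.Orders add DiscountCode varchar(20) null
go
-- 2) procedure change: drop and recreate keeps the script re-runnable
if object_id(N'dbo.usp_GetOrders') is not null
    drop procedure dbo.usp_GetOrders
go
create procedure dbo.usp_GetOrders as select OrderId, DiscountCode from dbo.Orders
go
-- 3) dictionary data: insert only the rows the client does not already have
insert into dbo.OrderStatus (StatusId, StatusName)
select 5, 'On Hold'
where not exists (select * from dbo.OrderStatus where StatusId = 5)
go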
I need to connect to a SQL Server that's running at, say, abc.trident.com and also sits in a shared environment.
I have a couple of questions:
1) That SQL Server is accessible from my network, yet when I switch on my Enterprise Manager I am unable to view it in the populated list of running SQL Servers.
Is it because it's sitting in a shared environment that I am unable to view it?
2) What is the connection string I should use to connect to the server? When I try to configure a SQL Server registration it normally asks for the SQL Server name along with the user authentication.
Should I fill in the Server: field as
abc.trident.com
or
// abc.trident.com/abc.trident.com
to connect to the clustered server.
Can someone tell me the proper connection string if both of the aforementioned ones are incorrect?
We are currently managing about 75 SQL Server instances. Each instance contains jobs (backups, index work, admin, etc.) which report back to a central instance for monitoring. This is working well, but each time I need to change one of the jobs I am having to log into each instance to do so.
I have recently played with the Multi-server Environment, using a master and target, to see if this might help. There is relatively little written about Multi-server in Books Online, and I am left with a question.
All jobs appear to be exact duplicates. But I don't want exact duplicates. For example, on the various target servers, I would like for the backup jobs to run at different times, and write to different directories. How are the rest of you working around this situation, if you are using Multi-server Environment?
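One pattern I've seen for this (offered as a sketch, not the official Multi-server answer) is to keep the MSX job identical everywhere and push the per-target differences, such as the backup directory, into a small config table on each target that the shared job step reads at run time; the different start times would still need separate schedules or separate jobs per target. Table and column names below are made up:
-- on each target server: a one-row table holding that server's backup directory
create table dbo.BackupConfig (BackupDir nvarchar(260) not null)
go
-- the shared MSX job step: same T-SQL on every target, different behavior per server
declare @dir nvarchar(260), @file nvarchar(400)
select top 1 @dir = BackupDir from dbo.BackupConfig
set @file = @dir + N'\MyDb_' + convert(char(8), getdate(), 112) + N'.bak'
backup database MyDb to disk = @file with init
go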
We currently have a SQL 2000 server setup with the data and log files (including those for the system) placed on a SAN. Our current server is old and starting to have problems so we are transitioning to a new server. I just wanted to see if anyone has done this before or if my plan makes sense.
1) Hook the new server into the SAN arrays holding the data and log files.
2) Take the old Server off-line, and bring the new one up.
3) Point the Master db to the location of its data and log files.
4) Restart the server.
In theory, since the master "knows" where all of the other databases are located, when it starts the recovery process it should bring up all of the other databases on that server. Does this make sense or am I missing something?
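That is essentially how it works: master's catalog records the physical file paths for every other database, so if the SAN LUNs present the same drive letters and paths to the new server, recovery should bring them all up. One sanity check worth running before the cutover, using plain SQL 2000 system tables:
-- list every database and the physical files master has recorded for it
select d.name as database_name, rtrim(f.filename) as physical_file
from master.dbo.sysdatabases d
join master.dbo.sysaltfiles f on f.dbid = d.dbid
order by d.name
go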
I am experiencing a situation where certain functions work perfectly when I run the site on my local machine under MVS, but when I upload it to the server they do not. Here is an example: I am using a drop-down box (in a FormView) that is databound in an online submission form. When I run the application in MVS, I can edit the records by opening the form and selecting the new value in the drop-down list. The new value is then also saved to the database when I hit the update button. When I upload the code to the server, the drop-down list shows the correct information (so the databinding to the control seems to work correctly), but the new value is not saved to the database.
Here is the code for the drop-down list:
<asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource3" DataTextField="UserName" DataValueField="UserId" SelectedValue='<%# Bind("UserId") %>' CssClass="text">
<asp:SqlDataSource ID="SqlDataSource3" runat="server" ConnectionString="<%$ ConnectionStrings:LocalSqlServer %>" SelectCommand="SELECT [UserName], [UserId] FROM [vw_aspnet_Users] ORDER BY [UserName]">
Here is the code updating the database with the record (I have removed some fields as well as the Insert and Delete parts):
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:tourism_connect1 %>" UpdateCommand="UPDATE Resorts SET typeid = @typeid, destrictid = @destrictid, UserId = @UserId WHERE (resortid = @resortid)" OnInserted="SqlDataSource1_Inserted">
<UpdateParameters><asp:Parameter Name="typeid" /><asp:Parameter Name="destrictid" /><asp:Parameter Name="UserId" /><asp:Parameter Name="resortid" /></UpdateParameters>
</asp:SqlDataSource>
What I can mention further is that if I connect to the database that is on the live server (so the application runs in MVS on my local machine, but it retrieves the info from the online database), it also works fine. It only gives this issue when the online application is trying to update the values. There are also no errors during the process. I will appreciate any advice on how to overcome this, as I really do not know where to look anymore... Thanks in advance! Regards, Jan
My company uses a quad processor server connected to a SAN to load and summarize detail sales information from 2000 stores on a nightly basis. We poll and load around 5,000,000 rows of data each night. This information is summarized up to various levels, then replicated to one or more secondary datamart servers for end user access via web reporting, BI tools like ProClarity/Analysis Services, etc.
The initial data polling server is only touched by the development staff supporting the process (1-5 programmers) and is licensed for SQL server Enterprise using a CAL model. Each datamart server is licensed with MS SQL server processor licenses.
The question: We were told that the quad processor polling machine, which has no end user access allowed, must be licensed with processor licenses since it touches the data ultimately consumed by end users. This makes no sense to me.
The Microsoft white papers discussing multi-tier environments don't seem to address this type of issue. They focus on applications that ultimately pass a data request through to the SQL Server machine. In this situation, user requests are handled by the datamart servers, which are licensed with processor SQL licenses.
I've been asked to look into the possibility of using SQL Server in a high availability environment. We have a few web-based applications that use SQL Server back-end DBs. What we are looking into is whether we can use multiple instances of SQL Server (on multiple physical servers) with some type of clustering/load balancing. I haven't worked with SQL replication before, so I'm not even sure where to start exploring the possible avenues.
Can anyone push me in the right direction? Any info would be greatly appreciated!
Hi all, I have posted the same topic under ASP-NET-Developers but had no reply; I then posted it under ASP.Net Community and also had no reply. Hence, I'm posting it here to avoid the impression of a spam post... I have created a SqlContextTrigger using ADO.NET 2.0 to create a trigger on SQL Server 2005. In the class, I create a file and specify its path to be in the ApplicationDomain base directory, such as:
String Path = AppDomain.CurrentDomain.BaseDirectory.ToString()
However, the path is always the SQL Server Binn directory. I'd like to tell the program that I want the path to be related not to the SQL Server Binn directory but to the Visual Studio solution's project bin directory. How do I do that? Best regards
We have a combined network with some users on the old Windows NT 4.0 network and others on the new Windows 2003 servers using Active Directory. Will my users on the old WinNT4.0 domain have issues accessing / running reports from Report Manager?
I am looking at combining/consolidating some of our servers and was wondering if I can install SQL 2000 and SQL 2005 in a clustered environment on a SAN. Cheers, Angie
We are running 2 SQL Servers and both run in a failover clustered environment. The problem is that we now need to replicate a database from one virtual SQL Server to the other.
The second one (clustered environment) is storing its database locally, while the first one (clustered environment) is storing its database in shared storage. Note that both servers are used for separate purposes, but we now need to set up replication to the other clustered setup for reporting purposes.
Will it work if we configure replication from one SQL Server clustered setup to the other clustered setup? If yes, then please let me know how it can be done.
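It can work; replication is configured against the virtual server names just as it is for standalone instances, as long as the snapshot folder sits on storage that both nodes of the publishing cluster can reach. A bare-bones sketch of the publisher side with placeholder names (the subscriber would be the other virtual server), offered only as a starting point:
-- on the publishing virtual server (names and password are placeholders)
exec sp_adddistributor @distributor = N'VSQL1', @password = N'StrongPwdHere'
go
exec sp_adddistributiondb @database = N'distribution'
go
exec sp_replicationdboption @dbname = N'ReportSourceDb', @optname = N'publish', @value = N'true'
go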
I have looked all over the MS site and can find very little information regarding SQL Server's support for the VMWare environment. Based on other information that I have obtained on other forums and web-site it looks like it is not supported by Microsoft, but I would like the company line if possible.
I am new to databases. I have a website in a hosting environment and it uses SQL Server 2005 databases. I want to have a complete backup of my database in addition to the hosting company's routine backup. I want to write a program to automatically back up everything every night. And of course have the ability to restore the whole database if necessary. How should I do it? Are there any existing tools I can use? What is the best way to deal with backup/restore in a hosting environment? Thanks!
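If the host gives you SQL Agent access, the backup itself is one statement and the "every night" part is a job; if not, you would ask the host to schedule something equivalent and then download the .bak file. A sketch with placeholder names, paths, and times:
-- the nightly step (path is a placeholder the host would need to allow)
backup database MyHostedDb to disk = N'D:\Backups\MyHostedDb.bak' with init
go
-- a minimal SQL Agent schedule for it: 2:00 AM every day (names are placeholders)
exec msdb.dbo.sp_add_job @job_name = N'Nightly full backup'
exec msdb.dbo.sp_add_jobstep @job_name = N'Nightly full backup', @step_name = N'backup',
     @subsystem = N'TSQL', @command = N'backup database MyHostedDb to disk = N''D:\Backups\MyHostedDb.bak'' with init'
exec msdb.dbo.sp_add_jobschedule @job_name = N'Nightly full backup', @name = N'2am daily',
     @freq_type = 4, @freq_interval = 1, @active_start_time = 20000
exec msdb.dbo.sp_add_jobserver @job_name = N'Nightly full backup'
go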
Yesterday, our server 'failed over' to the passive node in our clustered environment. We are running Windows NT, with MSSQL 7.0. What triggered the fail over, according to the logs, was an error in tempdb. Basically, the error said that tempdb could not be expanded, (could not create any more space for tempdb).
The drive that tempdb database is located on has more than enough room. The tempdb is set at 300M, and can grow to over 700M. The drive that tempdb is on has over 19Gig of space on it.
One other important piece of information is that the drive holding our production database and tempdb is quite full. The 19 gig that is left represents only about 5 percent of the disk.
Can anyone shed some light on why the SQL Server would have failed over when trying to expand tempdb, when there were sufficient resources to handle that growth? Am I barking up the wrong tree? Could this be another error? When we restarted the server (both nodes and the disk array) the production database came back as suspect. Most of the stuff in BOL points to the space issue. The production database is around 37 gig.
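It may also be worth confirming what size and growth settings the failing node actually saw, since autogrow limits live in the catalog rather than on the disk. A quick check/adjustment sketch; the logical file name tempdev is the usual default, but verify it with sp_helpfile first:
-- see tempdb's current size, max size, and growth increment
use tempdb
go
exec sp_helpfile
go
-- example adjustment: let the data file grow in 50 MB steps with no upper cap
alter database tempdb modify file (name = tempdev, filegrowth = 50MB, maxsize = unlimited)
go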
I am creating a backup strategy for our non-production SQL Server 2005 data. We wish to utilize the Master/Target Server scenario available in SQL Server 2005, with events directed to the Master Server. The question I have is: what are the symptoms on the Target Servers if the Master Server fails and is unavailable for whatever reason? I can change the event notification back to the local server, but what other impact is there, and are there any recommendations on handling a Master Server outage so maintenance plans can continue to execute as scheduled on the Target Servers?
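For the worst case, where the master stays down long enough to matter, the escape hatch I'm aware of is defecting the target so its jobs become purely local again (they keep their definitions and schedules). A sketch, run at the target server:
-- run on the target server to break its enlistment with the unavailable master
exec msdb.dbo.sp_msx_defect @forced_defection = 1
go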
Need to set up a linked server on a two-node SQL cluster pointing to a standalone SQL Server through the security setting "be made using the current login's security context". It's a double-hop Kerberos setup. The cluster uses a domain account, so we manually set SPNs for that account with both the instance name and the protocol as well (set up these SPNs with the SQL virtual name only).
Also, constrained delegation has been set for that standalone SQL Server's service (MSSQLSvc).
Both the SQL cluster and the standalone server are in the same domain; the cluster service account is a domain account and the standalone SQL uses the default SQL local service account.
Coming to SQL, when I create the linked server, sometimes it lets me create it without issues and sometimes it throws the warning below, and even if I create it, it won't work.
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
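For reference, the T-SQL equivalent of the GUI choice "be made using the current login's security context" is sp_addlinkedsrvlogin with @useself = 'TRUE'; an intermittent ANONYMOUS LOGON failure then points at the Kerberos delegation hops rather than at this definition. Sketch with a placeholder server name:
-- create the linked server on the clustered instance (name is a placeholder)
exec sp_addlinkedserver @server = N'STANDALONESQL', @srvproduct = N'SQL Server'
go
-- map all logins to themselves, i.e. "current login's security context"
exec sp_addlinkedsrvlogin @rmtsrvname = N'STANDALONESQL', @useself = N'TRUE'
go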
We have a suite of applications which go through the full product life cycle, and I was wondering which SQL Server edition should be installed in each environment, from Proof of Concept -> Dev -> UAT -> Systest...
Production is straightforward, but I am just not sure about all the other environments.
Greetings and warm thanks to those of you that have been so helpful on this forum.
We recently started using SQL 2005 SSIS. Using Business Intelligence Studio (Nice job, Microsoft!!) we created and tested packages on a developer workstation using a local instance of SQL Server.
Now we want to run these packages on the development server. You'd think this would be as easy as publishing a web page using ftp but we are stuck!
What pieces do you need on the server to run the packages and where do you find them? We have the super-duper MSDN premium, planning on purchasing SQL Server Enterprise Edition once we move our project into the production arena.
Thanks very much we are all scratching our heads today on this.
I may have to spend extra money on hardware if my PC's hardware environment is not enough for installing SQL Server 2005. Please tell me about it. Might it be a RAM requirement? I don't know.
I am running into an issue trying to uninstall SQL Server 2005 64-bit. It hangs at the same point: Removing backup files. I then get a message that "The setup has encountered an unexpected error while Completing Commit. The error is: The cluster resource cannot be moved to another group because other resources are dependent on it." The instance that I am trying to uninstall is in a cluster hosting two other SQL Server installations. I have tried a few combinations when attempting to uninstall: running the cluster group containing the server I am uninstalling on a different cluster node than all the other cluster groups; running them all on the same node. The only one I haven't tried is moving the Cluster Group containing the IP address of the cluster, cluster name, etc. to another node while running the other groups on the other node. Has anyone run into this issue? Each of the nodes has been rebooted several times during these attempts and we are using Windows Server 2003 Clustering. Again, the hang happens at the same point every time. The exact error(s) from the event log are as follows:
Product: Microsoft SQL Server 2005 (64-bit) -- Error 29528. The setup has encountered an unexpected error while Completing Commit. The error is: The cluster resource cannot be moved to another group because other resources are dependent on it.
Product: Microsoft SQL Server 2005 (64-bit) -- Error 29530. Failed to set registry settings for server network libraries. The action is SetShilohRoot. The error is 2 (The system cannot find the file specified.)
I have a new production server with about 100 jobs on it. These are ETL jobs about half of which are in SSIS packages and half call stored procedures directly. For those calling stored procedures directly, might there be a way to use the system catalog to link the schedule of the job to the table being updated by the procedure? The result is a list of tables (those updated by jobs) and the schedule of when they are updated.
SQL 2012 Standard, VPS, Windows Server 2012 Standard
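A rough starting point for the system-catalog approach, run in the database that holds the procedures. Matching job steps to procedures by name is an assumption on my part (it will miss dynamic SQL and cross-database calls), and sys.sql_expression_dependencies does not distinguish tables a procedure reads from tables it updates:
-- jobs whose T-SQL steps mention a stored procedure, joined to the objects that procedure references
select j.name as job_name, s.name as schedule_name, p.name as procedure_name,
       d.referenced_entity_name as referenced_object
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps st on st.job_id = j.job_id and st.subsystem = N'TSQL'
join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
join sys.procedures p on st.command like N'%' + p.name + N'%'
join sys.sql_expression_dependencies d on d.referencing_id = p.object_id
where d.referenced_class = 1  -- OBJECT_OR_COLUMN references (tables, views, other procedures)
order by j.name
go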