I know the default data path on a SQL 7 server is defined in the registry (the SQLDataPath value). I want to be able to determine the default data path from a VB.NET application, on the local machine and on remote machines. Is using the Registry class the best way to do that, or is there a SQL command that can tell me? I have read about xp_regread, but I can't find it documented anywhere and I don't know what parameter list it expects. I thought this path might be in an INFORMATION_SCHEMA view, but I can't find it.
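From what I've pieced together, xp_regread is undocumented but is commonly called with this shape; the key path and value name below are assumptions (the question mentions SQLDataPath, and other installs expose SQLDataRoot under the Setup key), so verify the exact names in regedit first:

DECLARE @path NVARCHAR(512)
EXEC master.dbo.xp_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'SOFTWARE\Microsoft\MSSQLServer\Setup',   -- assumed key; varies by version/instance
    @value_name = N'SQLDataRoot',                            -- assumed value name
    @value      = @path OUTPUT
SELECT @path AS DefaultDataPath

Because it runs on the server, the same call would work against remote machines through a normal connection, without your VB.NET app touching the remote registry directly.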
I found some dead-end threads regarding this issue but have not seen a clean solution.
Using a Reporting Services security extension, I can't find a way to obtain the report's name or identity from within the CheckAccess() method so I can use it to query an external database to determine authorization. In other words, my company requires custom roles that can be used from within our ASP application as well as being honored by SQL Reporting Services 2005.
If anyone can help, I would really appreciate your time!
Hi, I am trying to write a method that calls a stored procedure and then returns the stored procedure's response into a variable I declared in the method:

private string GetFromCode(string strWebVersionFromCode, string strWebVersionString) { // call stored procedure }

strWebVersionFromCode = GetFromCode(strFromCode, "web_version"); // the variable that will store the response

How should I do this? Please assist.
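Here is a sketch of what I have in mind for the stored procedure side (the procedure, table, and column names are made up); on the C# side I assume I would set CommandType.StoredProcedure, add a SqlParameter with Direction = ParameterDirection.Output, call ExecuteNonQuery, and return the parameter's value as a string. Is that right?

-- hypothetical procedure; object names are assumptions
CREATE PROCEDURE dbo.get_web_version
    @from_code      VARCHAR(50),
    @version_string VARCHAR(50),
    @result         VARCHAR(100) OUTPUT
AS
BEGIN
    SELECT @result = web_version          -- assumed column
    FROM   dbo.versions                   -- assumed table
    WHERE  from_code = @from_code
END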
Hi, I just have a DataSet with my tables and that's it. I have a GridView with several pieces of data on it; there's no problem getting the data or inserting, but as soon as I try to delete or update some records the local machine throws the error "Unable to find nongeneric method...". I've tried to create an Update query in my table adapters, but that still doesn't work. I also tried removing the original_{0} parameters and got the same error. Please help if anyone has a solution.
I have a 6.5 database running on NT 4.0 that is approximately 43GB in used space. I am making a case to management for some sort of upgrade to the whole system.
What sizes of 6.5 databases would anyone consider more "risky"? Is 43GB large for 6.5 (my thought is that it is)?
Let's say, for instance, that you have a group of tables that store address information for different groups (e.g. Doctors, Patients, Providers, etc.). Would it be better to have each table store its own address information, or to create an Address table that stores this information with an address type and a link back to each table?
I prefer the second choice, but am having a hard time convincing other developers to follow this route. Maybe with some input from more experienced users I can make my point a little more effectively. Thanks in advance for any input you can provide.
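For reference, here is a rough sketch of what I mean by the second option (the table and column names are just illustrations):

CREATE TABLE dbo.AddressType (
    AddressTypeID INT IDENTITY(1,1) PRIMARY KEY,
    Description   VARCHAR(50) NOT NULL          -- e.g. 'Doctor', 'Patient', 'Provider'
)

CREATE TABLE dbo.Address (
    AddressID     INT IDENTITY(1,1) PRIMARY KEY,
    AddressTypeID INT NOT NULL REFERENCES dbo.AddressType(AddressTypeID),
    Line1         VARCHAR(100) NOT NULL,
    City          VARCHAR(50)  NOT NULL,
    State         CHAR(2)      NOT NULL,
    Zip           VARCHAR(10)  NOT NULL
)

-- each entity table then carries a foreign key to Address, e.g.
-- ALTER TABLE dbo.Doctor ADD AddressID INT NULL REFERENCES dbo.Address(AddressID)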
I need to import a CSV file with a few million records and 50 fields into a table. Only 1 column in the file needs to be transformed and a second column needs to be checked for data validity (e.g. don't want to let someone pass in 'CA' for an integer field.). Two approaches come to mind:
1. Use SSIS to read the file directly into the table, then apply T-SQL to do a mass update to the single field that needs to be transformed (with this approach it is not clear how to check the data validity in each row via T-SQL, though; see the sketch after this list).
2. Use SSIS to import the file one line at a time, transforming the data and checking its validity as it goes. I suspect this approach will be much slower than approach 1, but I haven't tried it yet.
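For approach 1, the best I can come up with for the validity check in T-SQL is to land everything in a varchar staging table and filter out the bad rows before the real insert; a rough sketch with made-up table and column names:

-- staging table loaded by SSIS with every column as varchar (names are assumptions)
SELECT *
INTO   dbo.ImportErrors
FROM   dbo.StagingImport
WHERE  ISNUMERIC(SomeIntColumn) = 0           -- rows that would fail the integer conversion
   OR  SomeIntColumn LIKE '%[^0-9-]%'         -- ISNUMERIC alone accepts '$', 'e', etc., so double-check

INSERT INTO dbo.TargetTable (SomeIntColumn, TransformedColumn)
SELECT CAST(SomeIntColumn AS INT),
       UPPER(ColumnToTransform)               -- stand-in for whatever the single transformation is
FROM   dbo.StagingImport
WHERE  ISNUMERIC(SomeIntColumn) = 1
  AND  SomeIntColumn NOT LIKE '%[^0-9-]%'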
In a situation where one may have a single master SQL Server that ultimately needs to communicate information back down to 1000's of downstream servers, what is the recommended architectural approach?
It doesn't feel right to have to add 1,000-5,000 routes to the master SQL Server. Is there a way to have the downstream servers "broadcast" their existence to the master, so that new servers can be added and updates can happen seamlessly? Does this fall into a pub-sub scenario, or is there a better way? And, if so, how do you ensure an open conversation (so that one server doesn't miss information that all the other servers received)? Should the master dynamically create routes, or is it better to rely on an open conversation initiated by the downstream server?
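If route-per-server is the way to go, I know the DDL itself is simple (the names and address below are placeholders); it's the management of thousands of these, and keeping them in sync as servers come and go, that worries me:

-- executed on the master, once per downstream server (service name and address are placeholders)
CREATE ROUTE RouteToDownstream42
WITH SERVICE_NAME = 'DownstreamService42',
     ADDRESS      = 'TCP://downstream42.mydomain.local:4022'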
I know I've seen documentation on this but I can't find it at the moment. What are the recommended file locations for a SQL install? System and data on a RAID drive and logs on a separate drive that's mirrored? Oh, and if anyone has links to this info, let me know as well.
Hi, what are current thoughts about who should own a database? I see three possibilities: 1. the DOMAIN\Administrator account (the person who starts up the server at boot), 2. 'sa', or 3. a person/user closely tied to the database.
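Whichever way, I know the owner can be changed after the fact; for example, to make 'sa' the owner (ALTER AUTHORIZATION is the SQL 2005 syntax, sp_changedbowner the SQL 2000 one):

-- SQL Server 2005 and later
ALTER AUTHORIZATION ON DATABASE::MyDatabase TO sa

-- SQL Server 2000 (run inside the database)
EXEC sp_changedbowner 'sa'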
I have a couple of years of light experience with SQL server. I'd like to start studying to take the SQL 2000 exams. I have a good test environment set up and I'm reading through the Books Online. Can anyone recommend a book or books that might be helpful for me? My end goal here is to pass the test in the near future, but I want to really learn SQL rather than just learn to pass the test.
I was pondering getting an MCDBA certification. I want to learn everything about the OS I'd use, so I just wanted to get some feedback on whether to go with NT or Server 2003, etc. And would anyone here recommend getting the certification, or not bothering with it?
We are replicating data from server1 to server2. We expect the connection between the servers to be reliable, but we cannot always guarantee uptime on both ends. We do not need real-time data access on server2. What type of replication would be best? The downside we see to snapshot replication is that the data will grow over time, which means the amount replicated will continue to grow. Can we set up transactional replication and then schedule the updates so it only replicates transactions since the last update? Does this present any problems if the connection is lost at any point between the servers? At this time, we will not be making any changes to the data on server2, so it does not need to be updated back on server1.
I have a database app that deploys with SQL 2005 Express to each end user. I would like to install SQL 2005 Express using Windows Authentication only. In this case, should I bother to set an sa password? And if I do set the sa password, how would I go about making sure that it is different for every installation of SQL Express? Would it be recommended to save every end user's sa password (possibly tens of thousands of passwords) just in case SQL maintenance needs to be done on their computer? Any help would be greatly appreciated. Thanks!
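The pattern I'm leaning toward, assuming Windows Authentication mode, is to set a strong random sa password during setup and then disable the login entirely so there is nothing worth recording; for example:

-- run once after installation; the password value is just a placeholder
ALTER LOGIN sa WITH PASSWORD = 'Some$trongRandomValue123'
ALTER LOGIN sa DISABLE

A member of the local Administrators group could still get in via Windows Authentication if maintenance is ever needed, so no per-customer password list would be required.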
I'm reading the book 'Professional SQL Server 2000 Programming' by Robert Vieira. There is a recommendation in it: "stay away from building views based on views." Why? What's so wrong with nested views?
I have been using the Microsoft Oracle Provider (MSDAORA) up until I needed to work with CLOB data types in Oracle. As much as I hate to switch providers at this phase in the project, I can't use MSDAORA due to the CLOB limitation.
So, what other providers are available?
I know that there is a native Oracle provider (OraOLEDB.Oracle.1) that is supported in SSIS. Does anyone have any comments on this?
Could anyone recommend a good book on SQL Server, please? I need to understand how to retrieve data with SELECT statements and commands. It is urgent!
I was wondering what everyone's preferred way to install a database in an automated fashion is, i.e.:
You have a web app. It's driven by SQL Server. You need to prompt the user for a server, username, password, and database. Once you have those, you execute the scripts against the DB.
I've been using osql.exe, but here's the situation: the installer may be run from a system that does not have the SQL Server client tools installed, which will be a problem.
So, given that the machine the application is being installed on does not have the client tools installed, how would YOU execute the provided SQL script against a remote server?
Hi, our company is an independent voice applications solution provider with a number of clients using our suite. We have a CT application suite running with an application server and SQL Server 7 / 2000 as the DB engine at the back end. The SQL Server has two databases configured: a Logging database (massive updates every second; the data grows rapidly) and a Configuration database (generally small and updated occasionally). Now we want to implement resilience on the server. We have to synchronize the two databases in 'real time' and in an 'efficient' manner, so that if the primary server or its databases become unavailable, the users are seamlessly switched over to the secondary server, which will have its own set of data kept up to date and well synchronized.

Typically, it can be explained as follows:
1. We will have two database servers: A - primary (acting as publisher) and B - secondary (acting as subscriber). Our application will initially be connected to A.
2. When A becomes unavailable (for whatever reason), the application will fail over to B.
3. All the users will be switched to server B and updates will continue there without being replicated to server A temporarily.
4. When A is back online, A needs to be brought up to date with B automatically (in other words, I shouldn't have to manually export all the data from B to A).

Our requirements are:
- The system should support bi-directional synchronization between the two servers for their set of databases (logging and configuration).
- There will be constant and heavy activity in the Logging database, so if one server goes down the data should be logged and maintained as-is on the second server, and on fail-back no data loss should occur, with minimum latency.
- There could be a scenario where a server fails over for a week, with constant logging every second. Once it fails back, the system should rapidly synchronize the data without noticeable delay between the two servers' database sets.
- The system should also work fine if a certain amount of records is purged over a time period.

Our concern is, given the above scenario, how any SQL Server replication strategy can help us achieve these requirements. Thanks, John
Just wondering which scenarios are suitable for SQLCLR. Any kind of data access is not recommended, I guess. Only things that cannot be easily done in T-SQL should be done in SQLCLR, but why? Can't those things be done in the app layer itself? Scenarios usually recommended for SQLCLR:
- external resource access (filesystem, registry, etc.)
- complex calculations
- recursion without data access (recursion with data access can be implemented with a CTE)
If data access with SQLCLR is not recommended, why should CLR even be used, with the logic residing in the database layer? It makes no sense to me. Any thoughts?
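For comparison, the "recursion with data access" case is the kind of thing a recursive CTE already handles in plain T-SQL (an employee/manager hierarchy here, with made-up table and column names), which is partly why I don't see where CLR fits:

-- assumed table: dbo.Employee(EmployeeID, ManagerID, Name)
WITH OrgChart AS
(
    SELECT EmployeeID, ManagerID, Name, 0 AS Depth
    FROM   dbo.Employee
    WHERE  ManagerID IS NULL                  -- anchor: top of the hierarchy

    UNION ALL

    SELECT e.EmployeeID, e.ManagerID, e.Name, oc.Depth + 1
    FROM   dbo.Employee e
    JOIN   OrgChart oc ON e.ManagerID = oc.EmployeeID
)
SELECT * FROM OrgChart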
Some of our databases have many transactions (a million or more) a day. I have read that every so often I need to rebuild indexes, update statistics for all tables (however that is done), and shrink the transaction logs.
I'm confused by all this. What are the recommended daily maintenance steps for database "health", and how can they be done?
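Here is a rough sketch of what I've gathered so far (assuming SQL Server 2005; on 2000 the equivalents would be DBCC DBREINDEX and the same BACKUP LOG), usually scheduled as Agent jobs rather than run by hand; corrections welcome:

-- rebuild (or reorganize) indexes, table by table
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD

-- refresh statistics across the database
EXEC sp_updatestats

-- the log is kept in check by regular log backups rather than by shrinking;
-- shrink only as a one-off after something unusual has bloated it
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'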
I am trying to create a dimension table and I am pulling in data from two tables to build it. I need all records from table A, any records from table B that are not in table A, and for the records that do match I need to use the fields from B. What would be the best way to approach this: merge join plus derived columns, or union all plus aggregation? Any suggestions?
It seems like it's harder to do this in SSIS than just doing it in the database.
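For what it's worth, in T-SQL the rule I described (all of A, the B-only rows, and B's columns winning on a match) comes out as a FULL OUTER JOIN with COALESCE preferring B; a sketch with made-up names:

SELECT COALESCE(b.BusinessKey, a.BusinessKey) AS BusinessKey,
       COALESCE(b.Attribute1, a.Attribute1)   AS Attribute1,   -- B's value when matched, else A's
       COALESCE(b.Attribute2, a.Attribute2)   AS Attribute2
FROM   dbo.TableA a
FULL OUTER JOIN dbo.TableB b
       ON a.BusinessKey = b.BusinessKey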
I did a search (google and on the forums) and found a few suggestions here and there, but I'd like something more complete to follow as far as naming conventions are concerned.
I wrote my first DB based on MySQL/Ruby/Active Record type naming convention...
- plural table names
- all lowercased
- underscores between words
- "id" is the auto incrementer for each table
- something + "_at" is for datetime fields
- something + "_on" is for date fields
- referencing the primary id in another table is "tablename (singular)" + "_id"
This worked great in Ruby/MySQL, but in C#/SQL Server it's an ambiguity nightmare! All of my "id" fields conflict, and a lot of my tables have "added_at" datetime fields that all conflict with each other. Essentially, any field that's named the same in one table as in another conflicts on joins.
For example: users post comments to stories submitted by users...
table = articles
fields: id, title, body, user_id

table = comments
fields: id, title, body, user_id, article_id
Trying to join these two tables is an ambiguity nightmare but I'd like to not have to name every field uniquely or start adding table prefixes to them all...
I guess I just need some good suggestions or links to recommended table structure/naming conventions for SQL Server. Thanks in advance!
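For example, right now I end up aliasing everything just to disambiguate the two tables above:

SELECT a.id      AS article_id,
       a.title   AS article_title,
       c.id      AS comment_id,
       c.title   AS comment_title,
       c.body    AS comment_body,
       c.user_id AS comment_user_id
FROM   articles a
JOIN   comments c ON c.article_id = a.id
WHERE  a.user_id = 42                 -- every column has to be qualified with its table alias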
We have just started using SQL 2005 and released our first few projects to production. We are currently using msdb storage for SSIS packages in production, using 'rely on server storage' for the protection level and separating the subject areas into folders under msdb in Management Studio.
However, some of our DBAs feel that this is not the right approach and that we should be storing the packages as XML files.
Does anyone have a recommendation for one or the other, or considerations to take into account when deciding which storage to use?
Would anyone have a suggestion on how to setup a partner to partner NIC configuration for heartbeats/mirroring traffic? I've been told this is the recommended setup but have not found much on how to do it. We currently have a teamed NIC config for redundancy, but would like to have a separate set of NICs on each partner so that mirroring traffic is not interrupted by any regular network traffic.
We also have a witness, running in full safety mode. Does this mean partner A and partner B both need NICs with a crossover cable between them, AND is it recommended for the witness to also have extra NICs to both partner A and partner B (with crossover cables)?
Any suggestions/help/links on properly configuring this would be appreciated.
What is the recommended data type I should use if I want to have a price field that can include "TBA"? I can't use smallmoney, I suppose, so should I use VARCHAR and then validate the string in Visual Studio?
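One alternative I'm considering is keeping the numeric type and modeling "TBA" separately, e.g. a nullable smallmoney where NULL means "to be announced", plus a computed flag (table and column names are made up):

CREATE TABLE dbo.Product (
    ProductID  INT IDENTITY(1,1) PRIMARY KEY,
    Price      SMALLMONEY NULL,                                 -- NULL = price not yet announced
    PriceIsTBA AS (CASE WHEN Price IS NULL THEN 1 ELSE 0 END)   -- computed convenience flag
)

The app would then display "TBA" whenever Price is NULL, and the column stays sortable and summable.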
I have an app that will have up to 13 PCs. Each machine will be logging data to the 200X Express server every 5 seconds. The data sent each time will be around 1 KB.
1. Is this within the limits of 200X Express?
2. At what point do you decide that Express is bottlenecked and Standard is needed?
As stated in the subject, I have a situation where database mirroring is to be employed for either manual or automatic failover, and all the client connections (including web connections) use ODBC, not ADO or OLE DB, so what methods are recommended? Client-side redirect is not available, so I could not employ the "Data Source=A; Failover Partner=B..." option.
Right now the method employed (pre database mirroring, basically using log shipping on SQL 2000) is to have a DNS alias for the ODBC connection, so that if the server were to change in a failover situation only the DNS record would have to be altered and the client connections would not have to be reconfigured.
I'm using SQL Server 2005. I created a customer table and set customer_id as a uniqueidentifier defaulted to NEWSEQUENTIALID(). When a customer is created I want to obtain the newly generated customer_id so that I can insert it into another table that has a one-to-many relationship with the customer table. Please help, how do I do this? Thanks.
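I've read that the OUTPUT clause can capture the generated value at insert time (since NEWSEQUENTIALID() only works as a column default, there is no expression to call directly); is something like this the right idea? The non-key column names and the child table below are just placeholders:

DECLARE @new_id TABLE (customer_id UNIQUEIDENTIFIER)

INSERT INTO dbo.customer (first_name, last_name)      -- assumed columns
OUTPUT INSERTED.customer_id INTO @new_id
VALUES ('Jane', 'Doe')

-- use the captured value for the related table (assumed name)
INSERT INTO dbo.customer_order (customer_id, order_date)
SELECT customer_id, GETDATE() FROM @new_id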
I'm working on a search engine right now, and I am querying out a set amount of information at one time. How can I grab the last row count of a query and use that as a starting point, or index, when "Next 100" is pressed? For instance, a person searches through a user table but only wants 25 users at a time, so the beginning index is 0 to 25. In my query I have "SELECT TOP 25 from users", but when the person searching wants to move to the next 25 users (index 26 to 50), how can I hold the last row displayed? It gets a little more complex too: if a user is searching by last name, the rows will be staggered. Is there a SQL function which will grab the number of the last row displayed? I've looked through the SQL books and was unable to find a function that does this, or anything close to it. I hope I made sense of this question; if you need more information, email me and I will try to clarify. Thank you for any information or hints.
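I've heard that ROW_NUMBER() in SQL Server 2005 handles this kind of paging without holding any state between requests (and that on SQL 2000 a nested TOP query is the usual workaround); is something like this sketch what I should be doing? Column names are assumptions:

DECLARE @start INT, @end INT
SET @start = 26   -- first row of the requested page
SET @end   = 50   -- last row of the requested page

SELECT user_id, last_name, first_name
FROM (
    SELECT user_id, last_name, first_name,
           ROW_NUMBER() OVER (ORDER BY last_name, user_id) AS row_num
    FROM   users
) AS numbered
WHERE row_num BETWEEN @start AND @end
ORDER BY row_num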
If I have, say, 20 records of sales employees, how can I get the top 3 locations by sales dollars for EACH employee? Each employee can have multiple locations where they have sold (let's say up to 50), and I only want the names of the top 3 locations. The closest I can get is filtering the dataset with a HAVING clause greater than a dollar amount, but this still gives me between 3 and 12 records for each, plus as it stands I have to literally enter each salesperson's number. Is this a loop or a cursor? Thanks.
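I've seen ROW_NUMBER() with PARTITION BY suggested for this kind of top-N-per-group problem on SQL Server 2005 (with a correlated TOP 3 subquery as the SQL 2000 workaround), so no loop or cursor should be needed; would something like this be the right direction? Table and column names are made up:

SELECT employee_id, location, total_sales
FROM (
    SELECT employee_id, location,
           SUM(sale_amount) AS total_sales,
           ROW_NUMBER() OVER (PARTITION BY employee_id
                              ORDER BY SUM(sale_amount) DESC) AS sales_rank
    FROM   dbo.sales
    GROUP BY employee_id, location
) AS ranked
WHERE sales_rank <= 3
ORDER BY employee_id, sales_rank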