I have a content site where everything is currently in one SQL Server DB. As I add features to the site, for example message boards and blogging, does it make sense to put those features in a separate database? What would I lose and gain in doing so? Thanks so much. Erik
Are there any real disadvantages or advantages to having one massive disk partitioned into two logical drives (not including the C drive) and separating the SQL database file and transaction log so that they don't reside on the same logical drive?
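To illustrate the layout I'm asking about, it would be something like this (database name, paths, and drive letters are just examples):

-- Data file on one logical drive, transaction log on the other
-- (D: and E: are the two logical drives carved out of the one physical disk)
CREATE DATABASE SampleDB
ON PRIMARY
    (NAME = SampleDB_Data, FILENAME = 'D:\SQLData\SampleDB.mdf')
LOG ON
    (NAME = SampleDB_Log, FILENAME = 'E:\SQLLogs\SampleDB_log.ldf');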
Hello, if I wrote the next eBay (yes I know, yawn-snore) and I had a database with 5 million auction items in it, what would be a really good strategy to get a search done very quickly? Would it involve something called OLAP and/or "data mining"? The only technology I am familiar with is simply SQL Server databases with stored procedures. I think I'd be guessing correctly to say that this technology simply wouldn't be fast enough *on its own* to do super fast queries against massive amounts of data. Any insights would be of great interest. Thanks. -Frameworker.
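By "on its own" I just mean a plain indexed query along these lines (the table and column names are made up for illustration):

-- The plain stored-procedure style I already know: an index plus a prefix search
CREATE NONCLUSTERED INDEX IX_AuctionItems_Title
    ON dbo.AuctionItems (Title);

SELECT TOP 50 ItemID, Title, CurrentPrice
FROM dbo.AuctionItems
WHERE Title LIKE @SearchTerm + '%'   -- a leading wildcard ('%term%') could not use this index
ORDER BY Title;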
While creating a model, I can access relational tables as a data source, but not OLAP cubes. Is it possible to browse OLAP cubes for mining? If yes, how? Thanks...
I've tried to use the export command to export a mining model from Management Studio, but it returns that the export statement is not supported for OLAP mining structures. I've checked the EXPORT (DMX) reference and I can't see any note that it is not applicable to OLAP structures.
I am wondering where I can store my mining results in the data mining engine. For example, I have results such as accuracy charts, decision trees, and other formats of output from the different mining algorithms I used. Where can I actually store those results for Reporting Services to use later? Is it possible to do that in SQL Server 2005?
Thanks a lot in advance for any help and guidance.
Our OLAP environment involves an ETL/Data Warehouse/Data Mart server and a cube publisher server. We would like to learn how to automate the archival/restore of OLAP databases. We are currently doing it manually through OLAP Manager. Any help would be appreciated. Thanks. James.
Hi, I am thinking of using an autonumber column in my table, which is likely to grow over time, but I am also concerned about what happens if the number of records gets too large. Is there any other kind of field to use instead of autonumber? Is uniqueidentifier a better option? I have had a bit of a discussion with my colleague about this matter.
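To make the discussion with my colleague concrete, these are the two options we are weighing (table names are just examples):

-- Option 1: autonumber (IDENTITY) key, 4 bytes (bigint would allow far more rows)
CREATE TABLE dbo.Orders_Identity
(
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    OrderDate datetime NOT NULL
);

-- Option 2: uniqueidentifier key, 16 bytes and globally unique
CREATE TABLE dbo.Orders_Guid
(
    OrderID uniqueidentifier DEFAULT NEWID() PRIMARY KEY,
    OrderDate datetime NOT NULL
);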
Hi, I have just run a simple data set through a model to predict a simple true or false value (i.e. binary output). The Lift Chart/Mining Legend in Analysis Services shows three results: Score, Population Correct (%), and Predict Probability (%).
Population Correct, I believe, is the percentage of predictions it got right out of the total number of predictions it tried to make. Is this correct?
However, I can't work out how the other two are derived, in particular the 'Score'. To give a live example, the scores were as follows:
Model            Score    Pop Correct    Pred Probability
Decision Trees   0.83     76.59%         54.28%
Neural Network   0.75     67.63%         50.05%
Ideal Model               100.00%
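For example, if my understanding of Population Correct is right, the Decision Trees figure of 76.59% would simply mean that roughly 7,659 out of every 10,000 test cases were predicted correctly.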
Can anyone help with this and give a detailed explanation?
I am trying to model data in Analysis Services with the Advanced Create Mining Model function in the Excel add-in. I am having trouble creating an association model that works like the Associate button above the Advanced button.
The format of my data is like this:
OrderID Product
100 Bike
100 Helmet
100 Shoes
200 Helmet
200 basketball
200 Bat
300 Shoes
300 Socks
The Associate button works perfectly, since it asks me which column is the transaction ID (OrderID) and which column I am trying to predict (Product). The Advanced Create Mining Model dialog asks me to determine what the columns are...
OrderID=key Product=Input+Predict?
When I run the Advanced Create Mining Model associate, I get a browser that gives me no rules, and support only for single-item itemsets (each product on its own, but no combinations of products).
Does anyone know what I have to do to get it to work like the Associate button?
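For reference, my understanding is that the Associate button effectively builds a nested-table model, roughly equivalent to this DMX (just my own sketch; the model name and parameter values are made up):

// OrderID is the case key; Product sits in a nested table that is both input and predictable
CREATE MINING MODEL [OrderAssociations]
(
    OrderID LONG KEY,
    Products TABLE PREDICT
    (
        Product TEXT KEY
    )
)
USING Microsoft_Association_Rules (MINIMUM_SUPPORT = 0.01, MINIMUM_PROBABILITY = 0.10)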
I'm about to use uniqueidentifier and NEWID() for my new DB,
The DB may grow wild, and I want to create the connections between tables very safely, so they would not just rely on numbers (that can be changed) as they do with IDENTITY...
If you have had some bad experience with this, or if you have any tips I can use, please let me know.
Also, should I take any special steps when using an index on it?
I have a hosted solution where the web host provider tells me I'm running on a 2005 platform, but when I check the DB options, it shows compatibility level 80. The provider tells me I'm not losing any performance by running in the lower compatibility mode. Is this truly the case? What else am I losing out on in terms of features or capabilities?
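For reference, this is how I'm checking the level, and what I believe would raise it if the host allowed me to (the database name is a placeholder):

-- Check the current compatibility level (SQL Server 2005)
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'MyHostedDB';

-- What I believe would move it to the 2005 (90) level, if the host permitted it
EXEC sp_dbcmptlevel 'MyHostedDB', 90;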
I would like to develop an application that can create data mining structures and a mining model in SQL Server 2005 with VB.NET. I tried the code from the book Data Mining with SQL Server 2005, chapter 14, but it did not work. Any good ideas?
Thank you very much for your help. The errors that I can see in the code that you gave in your answer are the following, and they are more or less the same as the ones I had previously.
I tried the code, but I initially encountered the following problems.
1. Any line that has a declaration such as As Server or As Database, like Public Function CreateDatabase(ByVal srv As Server, ByVal databaseName As String) As Database, gives me the error that type 'Database' is not declared; likewise, type 'Server' is not declared, and it does not give me any option.
2. In addition to that, As DataSource, As RelationalDataSource, As RelationalDataSourceView, As ScalarMiningStructureColumn, and As DataSourceViewBinding all give me the same 'type is not declared' error.
3. Finally, mc = New MiningModelColumn("Yearly income", Utils.GetSyntacticallyValidID("Yearly income", Type.GetType(MiningModelColumn))) gives me the error that it is not accessible in this context because it is 'Private'. I have some more problems, but I think that by solving the ones I referred to above I will solve the rest.
I perform data mining on all products and a specific product category. Do I need to create 2 data source views, one for all products and the other one for the specific product category? Afterward, I run the Data Mining Wizard 2 times to create 2 mining structures. I also need to add the same mining model (e.g. Bayes, Cluster) to each of these mining structures. Is there any simple way to do it?
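For context, the way I picture the "specific product category" source is just a filtered version of the same product table, something like this view (table, column, and category names are made up):

CREATE VIEW dbo.vProducts_Bikes
AS
SELECT ProductKey, ProductName, CategoryName, ListPrice
FROM dbo.Products
WHERE CategoryName = 'Bikes';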
I just found that I am not able to view the accuracy chart for my mining model. The error message is: "No mining models are selected for comparison", which is quite strange.
Hi! My boss is thinking of redoing our accounting system, which is currently running on FoxPro; we are planning on a VB/SQL Server platform. But he isn't convinced that the benefits of SQL Server outweigh those offered by FoxPro, especially since everyone in the office is very comfortable with FoxPro. Can anyone give me some solid advantages of SQL Server, or any other RDBMS, over FoxPro?
If I have an application that has at most 20 users, with an average of 3-4 concurrent requests to the server, and the database size is 1 GB, probably growing to at most 1.5 GB in the next 5 years, why would I choose Express over MSDE?
MSDE can take advantage of more than 1 GB of memory and can use 2 CPUs. I really don't see any benefit whatsoever in my case to going to SQL Express; in fact, all I see are drawbacks.
I hear about upgrading all over the place but I just don't see any good reason in my situation. Am I missing something here?
This may be too general a question but I'm going to ask it anyway.
I'm moving data from a source DB (say A) to a target DB (say B). On A I need to join 3 tables and, after some lookups etc., I need to populate several tables in B. Inserting into B's tables involves sequential operations, because in many cases I have to get back the value of an identity column to use as an input value in another table 'downstream'. Additionally, the tables in B are populated as a group, i.e. if the insert on any one fails, the entire group's insertion needs to be rolled back.
I set up a set of stored procedures to do this. The master stored proc opens a read-only cursor and, for each row of the cursor, executes the other SPs in the proper sequence. Some of the SPs are 'enclosed' within a transaction to enable a rollback of the group.
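To make the pattern concrete, the per-row logic inside those procedures is roughly like this (a simplified sketch; table and column names are placeholders):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO B.dbo.ParentTable (Col1, Col2)
    VALUES (@Val1, @Val2);

    -- capture the identity just generated so it can feed the downstream insert
    DECLARE @ParentID int;
    SET @ParentID = SCOPE_IDENTITY();

    INSERT INTO B.dbo.ChildTable (ParentID, Col3)
    VALUES (@ParentID, @Val3);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- a failure anywhere rolls back the whole group for this row
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;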
My major concern with this approach was the 'known' inefficiency of the cursor and the huge memory requirement its use would entail (the cursor would pull about 15 million rows).
So I began looking into SSIS, thinking it would be able to manage the system-resources aspect effectively and offer better performance overall. I've realized, however, that even in SSIS I would essentially need to first pull the 14 million rows into a memory-resident object (or a temp table, whose benefit I'm not convinced of or haven't fully understood) before looping through each row to perform the data inserts into B.
So, is there any real advantage to this over the first approach? Perhaps I haven't looked deeply enough or widely enough. Any constructive suggestions or feedback would be highly appreciated.
What are the advantages/disadvantages of using a Database Diagram to link all the tables in MS SQL Server Management Studio versus letting the application check and link the different tables at run time? Currently, I do not have all my tables linked in a Database Diagram; I do everything at run time in my application code-behind. What are the best practices? Which is easier, or perhaps more secure?
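To be concrete, linking the tables in the database itself would mean declaring foreign keys like this (table and column names are made up), whereas today my code-behind does the equivalent checks at run time:

ALTER TABLE dbo.Invoices
ADD CONSTRAINT FK_Invoices_Clients
    FOREIGN KEY (ClientID)
    REFERENCES dbo.Clients (ClientID);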
Hi, generally I write all my SQL in stored procedures instead of using ad hoc queries, but I don't feel good about stored procedures when I come across situations like this.
Let's say that I have a stored procedure something like this:

CREATE PROCEDURE dbo.proc_MYSP
    @CaseID char(10)
AS
    SELECT *
    FROM TABLE1
    WHERE CASEID = @CaseID
Suppose in the future the field CASEID is changed to char(20); then I need to change the declaration of @CaseID in all my stored procedures that take CaseID as an input parameter. If I write ad hoc queries, I need not worry about this. Is there any effective solution for a situation like this?