I use SQL Server 2005 Developer Edition and am not new to building databases (I've had enough experience, and my dad does the same kind of work).
I am (unfortunately) a university student and for my dissertation I am going to produce a SQL Server database with a strong emphasis on data mining.
Obviously, for the data mining to be useful at all I need to produce loads and loads of test data.
Fair enough, and there are applications which do this, such as EMS Data Gen, but can anyone recommend any other data generation utilities? EMS Data Gen handles unique attributes poorly, and since my scenario is a car manufacturer, this will cause problems when I come to the registration number attribute.
Also, why are utilities for SQL Server (and Oracle, for that matter) so expensive? This puts them out of my reach, makes it difficult to build a truly good database that will earn me good marks, and is demotivating. :(
Lastly, please feel free to recommend any utilities for SQL Server - performance monitors, backup utilities, anything. But if they are paid utilities, they have to be sensibly priced (<£100), because I cannot yet afford to pay >£1k on such utilities.
I've been tasked with generating some test data (a few thousand rows) into a new table in a new database. This database is a whole new idea, so I can't write a query to pull pieces of data from other databases. I cannot consider any third-party tools, such as what Redgate or Idera have to offer, and I can't consider free tools such as those I've found on GitHub. I've been instructed to restrict myself to Visual Studio 2013 and whatever I can get that works within that.
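If plain T-SQL run from the SQL editor inside Visual Studio 2013 (or SSDT) is acceptable, a few thousand rows can be manufactured without any external tool. A minimal sketch, with a made-up target table, that cross-joins a small digits list to build a number sequence and derives values from it:

-- Hypothetical target table; substitute your own schema.
CREATE TABLE dbo.TestData
(
    ID       int         NOT NULL PRIMARY KEY,
    SomeName varchar(50) NOT NULL,
    SomeDate date        NOT NULL
);

-- Cross-join a ten-row digits list three times to get 1,000 numbers;
-- add another CROSS JOIN for 10,000, and so on.
;WITH Digits AS
(
    SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(d)
),
Numbers AS
(
    SELECT a.d * 100 + b.d * 10 + c.d + 1 AS n
    FROM Digits a CROSS JOIN Digits b CROSS JOIN Digits c
)
INSERT INTO dbo.TestData (ID, SomeName, SomeDate)
SELECT n,
       'Name ' + CAST(n AS varchar(10)),
       DATEADD(DAY, n % 365, '20140101')
FROM Numbers;

Unique columns (registration numbers and the like) fall out naturally here because every value is derived from the distinct sequence number.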
I am wondering whether it is possible to use SSIS to split a data set into a training set and a test set and feed them directly into my data mining models, without saving them anywhere, since they would occupy too much space. I really need guidance on that.
We have a production SQL 7 server, plus QA and Development. From time to time I want to move just the data from the production server to the other two servers without modifying objects that may have been changed, such as stored procedures and rights. Is there a way, using the SQL tools provided, that we can move just the data? What also happens is that the rights to the objects change, which means my developers no longer have access to the tables for selects in QA, since their changes were overwritten by production, where they do not have the rights.
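One low-tech approach, if a linked server from QA to production is available, is a per-table reload in plain T-SQL that never touches object definitions or permissions. A minimal sketch with placeholder server, database, and table names:

-- Assumes a linked server named PRODSRV pointing at production and that
-- the table definitions already match on both sides.
-- Use DELETE instead of TRUNCATE if foreign keys reference the table.
TRUNCATE TABLE dbo.Customers;

INSERT INTO dbo.Customers (CustomerID, CustomerName, Region)
SELECT CustomerID, CustomerName, Region
FROM PRODSRV.SalesDB.dbo.Customers;

Repeating this per table (parents before children) refreshes the data while leaving procedures and rights in QA exactly as the developers left them.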
I'm building a proc to generate fake stock portfolios for testing. I have a list of thousands of symbols, and I want the tester to be able to select how many symbols they want in their fake portfolio, and then give each symbol a random weighting (i.e. percentage held in that security) which, across all the symbols, sums to 100%. The securities here are not the part I care about, it's the weightings summing to 100 that's important.
So test data would look something like this:
/*
-- This is the repository of potential symbols I can add to a fake portfolio.
-- So the simple part is basically select top (@symbolCt) from #PossibleSymbols,
-- plus some magic I have yet to determine
*/
if object_id('tempdb.dbo.#PossibleSymbols') is not null
    drop table #PossibleSymbols

create table #PossibleSymbols
(
    SymbolID int
)

insert into #PossibleSymbols (SymbolID)
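For the weighting part, one minimal sketch, assuming a #Portfolio temp table (a name made up for illustration) and that approximate decimal weights are acceptable: give each selected symbol a random raw value, normalize by the total so the weights sum to 100, and let the last row absorb any rounding remainder.

if object_id('tempdb.dbo.#Portfolio') is not null
    drop table #Portfolio

create table #Portfolio
(
    SymbolID  int,
    RawWeight float,
    Pct       decimal(9,4)
)

declare @symbolCt int
set @symbolCt = 10

-- Pick the requested number of symbols and give each a random raw weight.
insert into #Portfolio (SymbolID, RawWeight)
select top (@symbolCt)
       SymbolID,
       rand(checksum(newid()))   -- per-row random value in [0, 1)
from #PossibleSymbols
order by newid()

-- Normalize so the weights sum to (approximately) 100.
update p
set Pct = 100.0 * p.RawWeight / t.TotalWeight
from #Portfolio p
cross join (select sum(RawWeight) as TotalWeight from #Portfolio) t

-- Push any rounding remainder into one row so the total is exactly 100.
update top (1) #Portfolio
set Pct = Pct + (100.0 - (select sum(Pct) from #Portfolio))

select SymbolID, Pct from #Portfolio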
I've built my SQL Server Express database with SQL Server Management Studio Express, and now I want to enter some seed data to assist in building the app around it. I cannot find an option to manage the data in SSMS Express, like I used to with Enterprise Manager. I don't want to have to write an app just to get test data in. This seems like it should be a common need. Am I missing something obvious here? I can't find any reference to this in a search of the forums. Please help.
I have a closed polygon that coincidentally is in the shape of Iowa :) I have a point that is within the state and a point well outside it, but I get weird results that I don't expect when I try to get it to tell me that the point is within the polygon. Here is some basic code, with long coordinate data.
(1 row(s) affected) As I read it, there is a distance of about 7864 meters, which is close to what I would expect, so that's OK. For the point outside I would expect a distance as well, so that is confusing. Then we have the intersects: it says that the point inside does NOT intersect but the one outside DOES, and this is backed up by the intersection values.
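One thing worth checking, since the results are exactly inverted: with the geography type, ring orientation matters, and a polygon whose points are listed in the wrong (clockwise) order is interpreted as covering everything except the intended area. A minimal sketch of the check and one possible fix, with placeholder coordinates; ReorientObject() requires SQL Server 2012 or later, and on 2008/2008 R2 the equivalent fix is to reverse the point order in the WKT:

-- Placeholder polygon and point; substitute the real coordinates.
declare @iowa geography = geography::STGeomFromText(
    'POLYGON((-96.6 40.6, -90.1 40.6, -90.1 43.5, -96.6 43.5, -96.6 40.6))', 4326)
declare @pointInside geography = geography::Point(41.9, -93.5, 4326)

-- An EnvelopeAngle() close to 180 usually means the rings are inverted and
-- the polygon effectively covers most of the globe.
select @iowa.EnvelopeAngle() as EnvelopeAngle

-- Flip the ring orientation if needed (SQL Server 2012+), then re-test.
if @iowa.EnvelopeAngle() > 90
    set @iowa = @iowa.ReorientObject()

select @pointInside.STIntersects(@iowa) as IsInside,
       @pointInside.STDistance(@iowa)   as DistanceInMeters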
Is there any tool available to migrate data from a SQL Server test database to a SQL Server production database? The data migration should be based on a condition which the user can supply as an input per table. The dependent tables should also be migrated based on the given condition, i.e. data subsetting based on the matching conditions.
E.g.: Salary > 2000
Only the rows of the table which match the condition need to be migrated for the corresponding table. Its dependent tables' rows should also be migrated based on the given condition. Please help me with a tool which can automate this.
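Absent a dedicated subsetting tool, the same idea can be hand-rolled in T-SQL across a linked server: copy the parent rows that match the condition, then copy the child rows whose foreign keys point at the copied parents. A minimal sketch with placeholder server, table, and column names:

-- Copy matching parent rows from test to production.
insert into PRODSRV.ProdDB.dbo.Employee (EmployeeID, Name, Salary)
select EmployeeID, Name, Salary
from TestDB.dbo.Employee
where Salary > 2000

-- Copy dependent rows whose foreign key references a copied parent.
insert into PRODSRV.ProdDB.dbo.EmployeeBenefit (BenefitID, EmployeeID, BenefitType)
select b.BenefitID, b.EmployeeID, b.BenefitType
from TestDB.dbo.EmployeeBenefit b
where b.EmployeeID in
(
    select EmployeeID
    from TestDB.dbo.Employee
    where Salary > 2000
)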
We are setting up a test lab environment with 100 machines. We want one master testing db that gets replicated to each to run scripted application tests nightly.
My goal is to minimize the amount of work needed to move this thing to each of the 100 test machines. I am wondering whether we even need to have SQL local on each machine, and should instead invest in a monster db server hosting 100 copies of the db that we restore, with each test machine pointing to its own db on that server, or whether I should use db mirroring or something to get the master test db to each of those machines instead.
Sorry about the huge post, but I think this is the amount of information necessary for someone to help me with a good answer.

I'm writing a statistical analysis program in ASP.NET and MSSQL 7 that analyzes data that I've collected from my business's webpage and the hits it's collecting from the various pay-per-click (PPC) engines. I've run into problems writing a SQL call to generate certain statistics.

Whenever someone enters our site from one of the PPC search engines, I write out a row to the Hits table. In that table are the following columns:

HitID - the unique ID assigned to each hit that comes into the site
Keyword - the keyword the user searched on when he or she came to the site
SearchEngine - the PPC engine the user came from
Source - this is pretty much always 'PPC'... if we were to do other things, like a newsletter, then this would be different.
TimeArrived - the date and time the user arrived at the website. I have no idea why I didn't call it "datearrived," since I use "date" and not "time" pretty much everywhere else...

(I don't think the rest are important, but they might be, so I'll include them for completeness's sake)

Referring URL - the URL the user came from
Referring Website - the string between the 'http://' and the first '/' in the URL. I know it's redundant information, but when I designed this part, I didn't know how to parse it out afterwards, so I just figured I'd duplicate it.
Page Visited - the page the user first arrived at

When a person comes to the site, I also write out a session cookie containing the user's HitID. If the person fills out an enrollment form (a process which we refer to as "responding"), I attach that session ID to the form. The response form (and thus the Responses table) is long; these are the important fields:

id - a unique ID for each response
date - the date and time of the response
status - a varchar field containing a status code. I would have made it a number, but I wanted it to be viewable from looking at the raw database.
hitid - the HitID of the user, taken from the session cookie. If there is no session cookie (for whatever reason), the HitID is written out as 0. While it wouldn't occur often, I can't guarantee that there will never be more than one response record attached to a singular hitid.

Later, some of the responses turn into "confirmations", which means that they've actually ordered from us, not just filled out the form. This usually happens about three or four days after the initial response. When this happens, the status of the response is changed to a phrase containing the word "confirm" in it (there are a few of them, but they all contain that word).

So now that we've collected all this marketing intel, I need to analyze it.

I've written a parser that takes reports from the various pay-per-click companies and puts them into a table called PPC. Information in this table is written out as one record per search engine per keyword per day.
The schema is as follows:

id - a unique ID for the record in the table
date - the date to which the information in the record applies
searchengine - the PPC engine to which the information applies
keyword - the keyword to which the information applies
clicks - the number of clicks on the applicable keyword on the applicable search engine on the applicable day
impressions - same as clicks, but for impressions
cpc - the cost per click on the applicable keyword
avgpos - (I don't always have a value for this field) the average position that the keyword was shown in for the applicable keyword

With this data in, the last step is actually analyzing the three tables for useful statistics on the various keywords, search engines, and time frames. That's the step I've been trying to complete.

So what I need is a SQL call that I can run that generates a table with the following information:

SearchEngine
Keyword
Cost / Click - When calculating the CPC, I can't just take an average of all the records. I need to calculate the total amount spent per day (clicks * cpc), add that up for every day, and then divide that by the number of total clicks. Just doing an average doesn't take into account the fact that some days we'll get more clicks than others.
Total Spent - # Clicks * CPC
#Responses - counting the number of records in the Responses table
#Confirms - counting the number of records in the Responses table with "confirm" in their status
Total Spent / #Responses
Total Spent / #Confirms

Oh yeah, and I want to be able to order by any four of the fields in any order, narrow my selection to only those keywords that either are or contain a user-specified string, further narrow my selection to only those records that fit other user-specified criteria for any of the columns in the table I'm generating, and select only the top x records (where x is a user-specified number). I already have user controls that output the SQL for all of these things, but I need to have places in which I may put that SQL in my call.

After many trials and tribulations, I've come up with the following SQL call. Right now, its output for nearly every row is incorrect, I think in large part due to the fact that the method I'm using to generate the number of clicks is yielding incorrect values.

If you'd like to help me and you think that modifying the following call is easier than writing a whole new one, be my guest; if you'd prefer to write a new one, I'm game for that, too. I'm just concerned with its working right now, and any help you can give me is greatly appreciated. Anyway, here's the call:

/*
sp_dboption @dbname='NDP', @optname='Select Into', @optvalue=true;
Running the above might be necessary to get the "SELECT INTO"s to work.
*/

Drop table ResponsesPPC
Drop table ConfirmPPC
Drop table TempPPC

SELECT Responses.[ID] as [ID], Responses.Status, PPC.SearchEngine, PPC.Keyword
Into ResponsesPPC
FROM Responses, PPC
WHERE Responses.HitID IN
(
    SELECT Hits.HitID
    FROM Hits
    WHERE Hits.SearchEngine = PPC.SearchEngine
      AND Hits.Keyword = PPC.Keyword
)

SELECT ID, Status, SearchEngine, Keyword
Into ConfirmPPC
FROM ResponsesPPC
WHERE Status LIKE '%confirm%'
Order by SearchEngine, Keyword

SELECT PPC.SearchEngine, PPC.Keyword,
    SUM(PPC.Clicks),
    /* I noticed that this column gives me incorrect values (I don't need it in my
       final report, but it's useful for debugging). For some keywords, it gives me
       huge numbers (e.g. 265 clicks on one word that got ~10 clicks/day over five
       days), and for others, it doesn't give me enough. I think this is a major
       part of what's throwing off the rest of the statistics. */
    Case SUM(PPC.Clicks) WHEN 0 THEN 0
         ELSE SUM(PPC.clicks * PPC.cpc) / SUM(PPC.Clicks) END as CPC,
    SUM(PPC.clicks * PPC.cpc) AS TotalCost,
    count(ResponsesPPC.ID) As NumResponses,
    Count(ConfirmPPC.ID) As Confirms,
    (Case Count(ResponsesPPC.ID) WHEN 0 THEN 0
          ELSE SUM(PPC.clicks * PPC.cpc) / count(ResponsesPPC.ID) END) AS CostPerResponse,
    (Case Count(ConfirmPPC.ID) WHEN 0 THEN 0
          ELSE SUM(PPC.clicks * PPC.cpc) / count(ConfirmPPC.ID) END) As CostPerConfirm
FROM (PPC LEFT JOIN ResponsesPPC
        ON PPC.SearchEngine = ResponsesPPC.SearchEngine
       AND PPC.Keyword = ResponsesPPC.Keyword)
     LEFT JOIN ConfirmPPC
        ON PPC.SearchEngine = ConfirmPPC.SearchEngine
       AND PPC.Keyword = ConfirmPPC.Keyword
GROUP BY PPC.SearchEngine, PPC.Keyword
Order by PPC.keyword desc

/*
Drop table ResponsesPPC
Drop table ConfirmPPC
Drop table TempPPC
I don't drop them right now so I can look at them, but normally, one would drop those tables.
*/

Thanks a lot for your help,
-Starwiz
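The inflated click totals look like a join fan-out: each PPC row is repeated once per matching ResponsesPPC and ConfirmPPC row before the GROUP BY, so SUM(PPC.Clicks) is multiplied, and the response counts are likewise multiplied by the number of PPC rows per keyword. A hedged sketch of one fix, reusing the table names above: pre-aggregate the response and confirm counts per search engine and keyword in derived tables (counting distinct response IDs, since ResponsesPPC itself holds one row per response per PPC record), then join those one-row-per-key results back to PPC.

SELECT PPC.SearchEngine, PPC.Keyword,
    SUM(PPC.Clicks) AS TotalClicks,
    CASE SUM(PPC.Clicks) WHEN 0 THEN 0
         ELSE SUM(PPC.Clicks * PPC.cpc) / SUM(PPC.Clicks) END AS CPC,
    SUM(PPC.Clicks * PPC.cpc) AS TotalCost,
    MAX(COALESCE(r.NumResponses, 0)) AS NumResponses,
    MAX(COALESCE(c.NumConfirms, 0))  AS Confirms
FROM PPC
LEFT JOIN (SELECT SearchEngine, Keyword, COUNT(DISTINCT ID) AS NumResponses
           FROM ResponsesPPC
           GROUP BY SearchEngine, Keyword) r
  ON PPC.SearchEngine = r.SearchEngine AND PPC.Keyword = r.Keyword
LEFT JOIN (SELECT SearchEngine, Keyword, COUNT(DISTINCT ID) AS NumConfirms
           FROM ConfirmPPC
           GROUP BY SearchEngine, Keyword) c
  ON PPC.SearchEngine = c.SearchEngine AND PPC.Keyword = c.Keyword
GROUP BY PPC.SearchEngine, PPC.Keyword
ORDER BY PPC.Keyword DESC

The cost-per-response and cost-per-confirm columns can then divide TotalCost by those counts with the same CASE guards used in the original call.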
I have been generating report models for users to use with Report Builder and there is no data when they select the model. I noticed that the tables I chose did not have a primary key and when I chose a different table, with a primary key, and generated a model from it, then there was data for the user to use in Report Builder.
Is there a documented workaround, or will I need to set a primary key on each table?
I am attempting to explain my problem again. Please read it:
I have 3 tables: CallDetail, Call and Request. The tables are populated in the following order: one row for CallDetail, one for Call and one for Request, and so on.
I have to generate a UniqueNo per EmpID, per StateNo, per CityNo, per CallType. The number will remain the same for the same CallDetailID, ordered by the date created. However, if the CallDetailID changes, the number will increment based on the EmpID, StateNo, CityNo and CallType.
For example (assume the CallDetailID changes each day):

Monday - 3 calls made for empid 1, state SA023, city 12 and call type 1 will generate unique id 1 for all 3 calls.
Tuesday - 2 calls made for empid 1, state SA023, city 12 and call type 1 will generate unique id 2 for both calls.
Wednesday - 3 calls made for empid 1, state SA023, city 12 and call type 2 will generate unique id 1 for all 3 calls, as the call type is different from the previous day for the same employee.
Thursday - 2 calls made for empid 2, state SA023, city 13 and call type 1 will generate unique id 1 for both calls, as the combination of city and call type is different.
So the unique id has to be generated considering empid, state, city and call type, ordered by the EntryDt. EntryDt is needed because:

3 calls made for empid 1, state SA023, city 12 and call type 1 at 10/11/2007 10.00 AM will generate unique id 1 for all 3 calls.
2 calls made for empid 1, state SA023, city 12 and call type 1 at 10/11/2007 12.00 AM will generate unique id 2, as the call was registered later.
Here is what I wrote with the help of a mod over here:
INSERT @Request
SELECT '324234', 'Jack', 'SA023', 12, 111, Null UNION ALL
SELECT '223452', 'Tom', 'SA023', 12, 112, Null UNION ALL
SELECT '456456', 'Bobby', 'SA023', 12, 114, Null UNION ALL
SELECT '22322362', 'Guck', 'SA023', 12, 123, Null UNION ALL
SELECT '22654392', 'Luck', 'SA023', 12, 134, Null UNION ALL
SELECT '225652', 'Jim', 'SA023', 12, 143, Null UNION ALL
SELECT '126756', 'Jasm', 'SA023', 12, 145, Null UNION ALL
SELECT '786234', 'Chuck', 'SA023', 12, 154, Null UNION ALL
SELECT '66234', 'Mutuk', 'SA023', 12, 185, Null UNION ALL
SELECT '2232362', 'Buck', 'SA023', 12, 195, Null
DECLARE @Call TABLE(CallID INT, CallType INT, CallDetailID INT)

INSERT @Call
SELECT 111, 1, 12123 UNION ALL
SELECT 112, 1, 12123 UNION ALL
SELECT 114, 1, 12123 UNION ALL
SELECT 123, 2, 12123 UNION ALL
SELECT 134, 2, 12123 UNION ALL
SELECT 143, 1, 6532 UNION ALL
SELECT 145, 1, 6532 UNION ALL
SELECT 154, 1, 6532 UNION ALL
SELECT 185, 2, 6532 UNION ALL
SELECT 195, 3, 6532
-- Query written with the help of a helpful person here
UPDATE r
SET r.UniqueNo = dt.CallGroup
FROM @Request r
JOIN @Call c ON r.CallID = c.CallID
JOIN (SELECT CallDetailID, EntryDt, EmpID,
             CallGroup = ROW_NUMBER() OVER (ORDER BY EntryDt)
      FROM @CallDetail) dt
  ON c.CallDetailID = dt.CallDetailID

SELECT * FROM @Request
(The call for Buck is of call type 3, which was not made earlier, so its number starts from 1.)
Also, how do I add the partitioning by empid, StateNo, CityNo and CallType and yet maintain the same unique number for the same CallDetailID? E.g.: CallGroup = ROW_NUMBER() OVER (PARTITION BY empid, state, city, calltype ORDER BY EntryDt)
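One hedged sketch of that idea, assuming @CallDetail carries EmpID and EntryDt (as in the query above) and @Request carries StateNo and CityNo for the matching CallID (the exact join path is an assumption): DENSE_RANK, rather than ROW_NUMBER, gives every row with the same CallDetailID the same number within each empid/state/city/calltype partition and only increments when the CallDetailID changes.

UPDATE r
SET r.UniqueNo = dt.CallGroup
FROM @Request r
JOIN @Call c ON r.CallID = c.CallID
JOIN (SELECT c2.CallID,
             CallGroup = DENSE_RANK() OVER (
                 PARTITION BY cd.EmpID, r2.StateNo, r2.CityNo, c2.CallType
                 ORDER BY cd.EntryDt, cd.CallDetailID)
      FROM @CallDetail cd
      JOIN @Call c2    ON c2.CallDetailID = cd.CallDetailID
      JOIN @Request r2 ON r2.CallID = c2.CallID) dt
  ON dt.CallID = c.CallID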
Can a stored procedure in SQL Server 2005 generate XML data based on a schema? We would prefer not to manually build an XML string inside the stored proc.
Is there any SQL Server 2005 feature to do this, if possible?
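SQL Server 2005's FOR XML clause is the usual way to have the engine shape the XML rather than concatenating strings by hand. A minimal sketch, with a made-up table and element names, using FOR XML PATH; the XMLSCHEMA directive on FOR XML RAW or AUTO can additionally emit an inline XSD describing the result:

-- Hypothetical table; substitute your own.
SELECT e.EmployeeID AS '@id',
       e.FirstName  AS 'Name/First',
       e.LastName   AS 'Name/Last'
FROM dbo.Employee AS e
FOR XML PATH('Employee'), ROOT('Employees'), TYPE

-- Variant that also emits an inline schema for the result set.
SELECT EmployeeID, FirstName, LastName
FROM dbo.Employee
FOR XML RAW('Employee'), ELEMENTS, XMLSCHEMA

Either form can be the body of a stored procedure, so no manual string building is needed.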
The tables are populated in the following order: one row for CallDetail, one for Call and one for Request, and so on.
I have to generate a UniqueNo per EmpID, per StateNo, per CityNo, per CallType and insert it into the #Request table along with the other data. How do I do this?
SAMPLE DATA
Insert into #CallDetail values(12123,1)
Insert into #CallDetail values(53423,1)
Insert into #CallDetail values(6532,1)
Insert into #CallDetail values(62323,1)
Insert into #CallDetail values(124235,1)
Insert into #CallDetail values(65423,2)
Insert into #CallDetail values(56234,2)
Insert into #CallDetail values(2364,2)
Insert into #CallDetail values(34364,2)
Insert into #CallDetail values(85434,2)
Insert Into #Call values(111,1,12123)
Insert Into #Call values(112,1,53423)
Insert Into #Call values(114,1,6532)
Insert Into #Call values(123,2,62323)
Insert Into #Call values(134,1,124235)
Insert Into #Call values(143,2,65423)
Insert Into #Call values(145,1,56234)
Insert Into #Call values(154,2,2364)
Insert Into #Call values(185,1,34364)
Insert Into #Call values(195,1,85434)
Insert Into #request Values('324234','Jack','SA023',12,111,0);
Insert Into #request Values('223452','Tom','SA023',12,112,0);
Insert Into #request Values('456456','Bobby','SA024',12,114,0);
Insert Into #request Values('22322362','Guck','SA024',44,123,0);
Insert Into #request Values('22654392','Luck','SA023',12,134,0);
Insert Into #request Values('225652','Jim','SA055',67,143,0);
Insert Into #request Values('126756','Jasm','SA055',67,145,0);
Insert Into #request Values('786234','Chuck','SA055',67,154,0);
Insert Into #request Values('66234','Mutuk','SA059',72,185,0);
Insert Into #request Values('2232362','Buck','SA055',67,195,0);
EXPECTED OUTPUT will be (see the last column for the unique numbers):
Insert Into #request Values('324234','Jack','SA023',12,111,1);
Insert Into #request Values('223452','Tom','SA023',12,112,2);
Insert Into #request Values('456456','Bobby','SA024',12,143,1); -- CallType = 1 and empid = 1, but the state is different, hence unique id 1
Insert Into #request Values('22322362','Guck','SA024',44,114,1);
Insert Into #request Values('22654392','Luck','SA023',12,123,3);
Insert Into #request Values('225652','Jim','SA055',67,143,1);
Insert Into #request Values('126756','Jasm','SA023',69,134,1);
Insert Into #request Values('786234','Chuck','SA023',72,145,2);
Insert Into #request Values('66234','Mutuk','SA059',72,185,1);
Insert Into #request Values('2232362','Buck','SA055',67,195,2);
Please note that this will not be run as a batch query; the number has to be generated and inserted into the #request table in real time. I have given a bulk of records to aid understanding of the problem.
Now that we have a good programming model in SSIS, the question is whether to write automated unit tests for your packages, and whether it would generally be a good idea for packages.
Also, if yes, where can I find more information on how to accomplish that?
Hi everyone, I need to test an SSIS package which will import data from a different database where the record count is around 5 million. I am planning to test it through C# code as well as manually. SSIS source: consists of 7 tables. SSIS destination: consists of 7 tables. Using C# code I am trying to run the SSIS package through a batch file. I am putting the expected row count and column count in an Excel file and comparing them with the destination tables by writing a query using ADO.NET. Am I going about it the right way? Can anyone suggest the best and most productive way to test the SSIS package, and what other things I need to test? Can anyone add test cases to the list below? (A row-count comparison sketch follows the test-case list.)
Test cases:

1. Verify all the tables have been imported.
2. Verify all the rows in each table have been imported.
3. Verify all the columns specified in the source query for each table have been imported.
4. Verify all the data has been received without any truncation for each column.
5. Verify the schema at source and destination.
6. Verify the time taken / speed for the data transfer.
7. Verify fields truncated due to a difference in the length of the field at the destination.

Regards, Arif Shareef
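For the row-count and truncation checks, one hedged sketch (table names are placeholders, and it assumes source and destination are reachable from one server, e.g. via a linked server) is a simple per-table comparison query run after the package completes:

-- Compare row counts between a source and destination table (placeholder names).
SELECT 'Customer' AS TableName,
       (SELECT COUNT(*) FROM SRCSRV.SourceDB.dbo.Customer) AS SourceRows,
       (SELECT COUNT(*) FROM DestDB.dbo.Customer)          AS DestRows;

-- Spot-check for truncation on a specific column by comparing maximum lengths.
SELECT (SELECT MAX(LEN(CustomerName)) FROM SRCSRV.SourceDB.dbo.Customer) AS SourceMaxLen,
       (SELECT MAX(LEN(CustomerName)) FROM DestDB.dbo.Customer)          AS DestMaxLen;

The same queries can be executed from the C# harness via ADO.NET and asserted against the expected values held in the Excel file.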
Hi: I have SQL Server 2005 Express Edition and I am trying to generate a script that someone can take and import into the full version of SQL Server. Using the Generate Scripts option, I have been able to generate scripts for the various SQL objects that I created, but I cannot get it to include the data as well. I'm new to this and would appreciate any help. I have a populated database that I would also like to transfer to the new server. Any help greatly appreciated. Roger Swetnam
Hi, I'm using SQL Server 2005 Standard, and I want to be able to move my local database to another server, but I can't figure out how to script the database and the data so that I can just run one script to move the whole database. This can be done, right? I can't imagine that such an obviously necessary tool would be intentionally left out, so I'm figuring that I'm just a doofus and don't know where the option is...
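If the Generate Scripts wizard in a given edition only scripts the schema, one workaround is to generate the INSERT statements yourself with a query. A minimal sketch for a single small table (the table and columns are placeholders, and the quoting gets more involved once NULLs and special characters are in play):

-- Builds one INSERT statement per row for a hypothetical dbo.Product table.
SELECT 'INSERT INTO dbo.Product (ProductID, ProductName, Price) VALUES ('
       + CAST(ProductID AS varchar(10)) + ', '
       + '''' + REPLACE(ProductName, '''', '''''') + ''', '
       + CAST(Price AS varchar(20)) + ');'
FROM dbo.Product
ORDER BY ProductID;

Later versions of Management Studio also expose a "Script Data" (or "Types of data to script") option in the Generate Scripts wizard, which may remove the need for this.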
Is it possible to write a stored procedure (to automate this) that generates the STATISTICS on any database and then use the output to create those stats on that database?
I ran the Tuning Advisor and it suggested indexes with a lot of STATISTICS on the dev environment. This dev environment is replicated in several other environments, with the data size varying in each. I would like to know if I can create a stored procedure which generates the STATISTICS information pertaining to a specific database environment for the query in question for tuning.
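A hedged sketch of the scripting half, assuming SQL Server 2005 or later: read sys.stats and sys.stats_columns for the user-created statistics and emit one CREATE STATISTICS statement per entry, which can then be run in the other environments.

-- Emit CREATE STATISTICS statements for user-created (non-index) statistics.
SELECT 'CREATE STATISTICS ' + QUOTENAME(s.name)
       + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
       + ' (' + STUFF((SELECT ', ' + QUOTENAME(c.name)
                       FROM sys.stats_columns sc
                       JOIN sys.columns c
                         ON c.object_id = sc.object_id AND c.column_id = sc.column_id
                       WHERE sc.object_id = s.object_id AND sc.stats_id = s.stats_id
                       ORDER BY sc.stats_column_id
                       FOR XML PATH('')), 1, 2, '') + ');'
FROM sys.stats s
JOIN sys.objects o ON o.object_id = s.object_id
WHERE s.user_created = 1
  AND o.is_ms_shipped = 0;

Wrapping this SELECT in a stored procedure (or writing the output to a table) would give the automated generation step; the generated statements still need to be executed against each target environment so the statistics are built from that environment's own data.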
When I attempt to generate a data source model I get the following error messages:
------------------------------------------------
More than one item in the Entity 'Customer' has the name 'Customer Merge Custs'. Item names must be unique among immediate siblings. (DuplicateItemName)
More than one Field in the Entity 'Customer' has the name 'Customer Merge Custs'. Field names must be unique within an Entity. (DuplicateFieldName)
More than one item in the Entity 'Pricing Service Layout Detail' has the name 'Pricing Service Extensions'. Item names must be unique among immediate siblings. (DuplicateItemName)
More than one Field in the Entity 'Pricing Service Layout Detail' has the name 'Pricing Service Extensions'. Field names must be unique within an Entity. (DuplicateFieldName) ---------------------------------------------------
Examining any of the above tables in SQL Server Management Studio does not reveal any duplicate column names. In fact, 'Customer_Merge_Custs' does not appear to be a column in 'Customer' nor does 'Pricing_Service_Extensions' appear in 'Pricing_Service_Layout_Detail'.
As an experiment, deleting the table 'Pricing_Service_Extensions' and regenerating did make the two associated messages go away.
Basically, I want to set GeneratedDesc = Data1 + ' ' + Data2 + ' ' + Data3 where an account sets the order 1,2,3
GeneratedDesc = Data2 + ' ' + Data3 + ' ' + Data1 where an account sets the order 2,3,1
Basically, the generated description is set in an order that is chosen by the account.
I am not sure how to go about doing this, outside of dynamically generating the query and looping through all the rows in the table, which I think can get expensive for large amounts of data. I don't think creating a query for each combination would be good either (in this case 6 combinations, but for larger order sets, such as 6 fields, that can get to be quite a lot of queries).
Any ideas? (Not sure if this makes sense to anyone.)
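One set-based sketch, assuming a per-account ordering table with one row per source field per position (all names below are made up, and each account is assumed to define all three positions): expose each row's values as field-number/value pairs, join to the ordering preference, and reassemble them in the chosen order. This avoids both dynamic SQL and one query per combination. CROSS APPLY with a VALUES list needs SQL Server 2008 or later; on 2005 a small UNION ALL derived table does the same job.

-- Hypothetical ordering table dbo.AccountFieldOrder(AccountID, Position, FieldNo),
-- e.g. AccountID 2: Position 1 -> Field 2, Position 2 -> Field 3, Position 3 -> Field 1.
SELECT d.AccountID,
       GeneratedDesc =
           MAX(CASE WHEN o.Position = 1 THEN v.DataValue END) + ' ' +
           MAX(CASE WHEN o.Position = 2 THEN v.DataValue END) + ' ' +
           MAX(CASE WHEN o.Position = 3 THEN v.DataValue END)
FROM dbo.AccountData d
CROSS APPLY (VALUES (1, d.Data1), (2, d.Data2), (3, d.Data3)) v(FieldNo, DataValue)
JOIN dbo.AccountFieldOrder o
  ON o.AccountID = d.AccountID AND o.FieldNo = v.FieldNo
GROUP BY d.AccountID;

The same SELECT can feed an UPDATE that writes GeneratedDesc back, so the whole table is handled in one statement rather than a row-by-row loop.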
We have an MIS system which has approx 100 reports. Each of these reports can take up to several minutes to run due to the complexity of the queries (hundreds of lines each in most cases). Each report can be run by many users, so in effect we have a slow system.

I want to separate the complex part of the queries into a process that is generated each night. Then the reports will only have to query pre-formatted data with minimal parameters, as the hard part will have been completed for the users while they are not in. Ideally we will generate (via a stored procedure, possibly) a set of data for each report and hold this on the server. We can then query with simpler parameters, such as by date, and get the data back quite quickly.

The whole process of how we obtain the data is very complex. There are various views which gather data from the back office system. These are very complex, and when queries are run against them, including other tables to bring in more data, it gets nicely complicated.

The only problem is that the users want to have access to LIVE data from the back office system, specifically the Sales team, who want to access this remotely. My method only allows for data from the night before, so is there an option available to me which will allow me to do this? The queries can't be improved an awful lot, so they will take as long as they take. The idea of running them once is the only way I can see to improve the performance in any significant way.

True, I could just let them carry on as they are and let them suffer with the performance on live data, but I'd like to do something to improve the situation for them.

Any advice would be appreciated.

Thanks
Ryan
I have a stored procedure that takes 3 parameters: @user_id, @begin_date and @end_date. The parameters are set up in the 'Parameters' tab of the dataset, and also in 'Report Parameters'; however, when I go to run the report, I get a text box for user_id instead of a drop-down with a pick list.
I tried creating a separate dataset to bring in user_ids only and manually created a parameter for it in 'Report Parameters'. I then get a drop-down box with repeating data, and when I run the report, I get back all user_ids instead of the one I chose.
I'm finding that parameters are the most difficult concept within RS. Does anyone know how I can make this work?
How can I insert test data into the database? I want to insert one million records into the table; this is to test database performance. Can anyone help me in this regard? Do we have any scripts for this purpose? Thanks, Mar
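A minimal sketch of one common approach (the target table and columns are made up for illustration): cross-join a catalog view with itself to manufacture a million row numbers, then insert derived values in a single set-based statement.

-- Hypothetical target table for the performance test.
CREATE TABLE dbo.PerfTest
(
    ID       int         NOT NULL PRIMARY KEY,
    SomeText varchar(50) NOT NULL,
    SomeDate datetime    NOT NULL
);

-- sys.all_objects typically has a few thousand rows, so the cross join
-- easily yields more than 1,000,000 rows to pick from.
;WITH Numbers AS
(
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.PerfTest (ID, SomeText, SomeDate)
SELECT n,
       'Row ' + CAST(n AS varchar(10)),
       DATEADD(MINUTE, n % 525600, '20070101')
FROM Numbers;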
I've seen that sometimes it is better to split the table into a test dataset and a training dataset, and I'd appreciate it if anyone could explain why this is...
We have a website that accesses our SQL databases. In the past, we used our internal employees to improve our SQL databases. However, we want to outsource the work.
There is a lot of information we would like to keep private from the outsourcers.
Is there a way to efficiently make test data throughout our database without changing our original database? Is there a way to easily update it with new changes afterwards?
I found this product through Google... EMS Data Generator for SQL Server: http://www.sqlmanager.net/en/products/mssql/datagenerator Would this program help us make test data?
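If the main concern is privacy rather than volume, another option besides generating data from scratch is to restore a copy of the database and overwrite the sensitive columns before handing it to the outsourcers. A minimal sketch, with made-up table and column names:

-- Run against a restored COPY of the database, never against the original.
-- Replace personally identifiable values with deterministic fakes.
UPDATE dbo.Customer
SET CustomerName = 'Customer ' + CAST(CustomerID AS varchar(10)),
    Email        = 'customer' + CAST(CustomerID AS varchar(10)) + '@example.com',
    Phone        = '555-0100';

Because the fakes are derived from the key, the copy stays internally consistent and can be refreshed by simply restoring a newer backup and re-running the masking script.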
I run XP Professional on a home machine. I recently installed VB 2003 with MSDE. When I go to the Server Explorer window and right-click to add a connection, the Data Link Properties dialog box pops up as it should. I select the name of the home server, indicate integrated security, and select a sample database such as model or master from the list. The test connection is successful, and clicking OK causes the connection to appear in the Server Explorer window. Yet when I go to open the connection, there is no data. The folders for Tables, Stored Procedures and so forth appear, but they are empty. I have another 2000 database that a friend sent me. If I try to indicate in the Data Link Properties dialog box that I would like to attach this database, I get the same symptom: the test connection is successful, yet when I create and open the connection, the folders are empty. I am a newbie to ADO.NET and I am not sure where to begin. Can someone help?
There is a production database which has ever-increasing data. For testing purposes, though, I would like to build a test database with exactly the same schema but only a subset of the data copied from the production database. I'll specify the criteria (something like a WHERE clause in a SELECT query) for copying the data from the production database.

Is there a tool that anyone has come across to do this job?
I have two SQL databases on separate servers, live and test. I have been asked to copy the data from the live system and put it into test. Both run SQL Server Management Studio 2008 on MS Server 2008 R2.
Could a simple backup of the database, then copying that file to the test system and restoring the database from that point, work? Or is there more to it?
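That approach generally works for a full refresh of the test copy; note that it overwrites everything on the test side, including users and permissions, so those may need re-applying afterwards. A minimal sketch, with placeholder database names and paths:

-- On the live server:
BACKUP DATABASE LiveDB
TO DISK = 'D:\Backups\LiveDB.bak'
WITH INIT;

-- Copy the .bak file to the test server, then on the test server:
-- (The logical file names 'LiveDB' and 'LiveDB_log' are assumptions;
--  check the real ones with RESTORE FILELISTONLY.)
RESTORE DATABASE TestDB
FROM DISK = 'D:\Backups\LiveDB.bak'
WITH REPLACE,
     MOVE 'LiveDB'     TO 'D:\Data\TestDB.mdf',
     MOVE 'LiveDB_log' TO 'D:\Logs\TestDB_log.ldf';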
The test.sql scripts I write to test CLR stored procedures run successfully, but when I want to display the resulting data in the database with a simple "SELECT * from Employee"
I get the result as:

Name Address
---- -------
No rows affected.
(1 row(s) returned)
But the actual row is not displayed, whereas I would expect to see something like:
Name Address
---- -------
John Doe
No rows affected.
(1 row(s) returned)
I have another database project where doing the same thing displays the row information, but there doesn't seem to be much different between the two.
In 2000, BCP seemed the way to go, and DTS packages would also work. My question is: in 2005, what is the best choice? I seem to remember that BCP ignored all referential integrity constraints, and applying them afterwards was a royal pain. I'm not a BCP expert by any means. Running this at the command line means using the DOS prompt, correct?