Can anyone give any advice on finding database files for testing (CSV files or what have you)? I want to have some fun, repetitive Transact-SQL practice creating databases, users, etc. I have seen some online, but I wondered first if anyone here uses a site in particular that they could recommend. It would be nice to have files in a standard format (CSV) and in another database format that I could use, to mix it up some.
I have a DB I want to make a copy of for testing purposes so we don't crash our production DB. I want the new "Test" DB to reside in the same SQL Server group on the same SQL server. I have tried the Copy Database Wizard, but it will not let you copy a DB to the same server. What is the best way to do this?
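For what it's worth, one common approach is a backup/restore under a new name on the same instance. This is only a sketch: ProdDB, ProdDB_Test, the file paths and the logical file names below are all placeholders.

-- Back up the production database (name and path are placeholders)
BACKUP DATABASE ProdDB
TO DISK = 'D:\Backups\ProdDB.bak'
WITH INIT
GO
-- Find the logical file names first if you don't know them:
-- RESTORE FILELISTONLY FROM DISK = 'D:\Backups\ProdDB.bak'
-- Restore under a new name, moving the files so they don't collide
-- with the production MDF/LDF
RESTORE DATABASE ProdDB_Test
FROM DISK = 'D:\Backups\ProdDB.bak'
WITH MOVE 'ProdDB_Data' TO 'D:\Data\ProdDB_Test.mdf',
     MOVE 'ProdDB_Log'  TO 'D:\Data\ProdDB_Test_log.ldf'
GO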
I have what seems a simple requirement. We want to import the contents of a SQL Server 2005 Reporting Services report into a SQL Server 2005 database, in order to perform some checks on reports displayed to users. Is there an easy way to achieve this? XML would seem appropriate, but I can't find a step-by-step guide on how to achieve this. Any pointers/suggestions would be appreciated.
I have been placed in the position of administering our SQL server 6.5 (Microsoft). Being new to SQL and having some knowledge of databases (used to use Foxpro 2.6 for...DOS!) I am faced with an ever increasing table of incoming call information from our Ascend MAX RAS equipment. This table increases by 900,000 records a month. The previous administrator (no longer available) was using a Visual Foxpro 5 application to archive and remove the data older than 60 days. Unfortunately he left and took with him Visual Fox and all of his project files.
My question is this: Is there an easy way to archive then remove the data older than 60 days from the table? I would like to archive it to a tape drive. We need to maintain this archive for the purposes of searching back through customer calls for IP addresses on certain dates and times. We are an ISP, and occasionally need to give this information to law enforcement agencies. So we cannot just delete it.
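A straightforward pattern for this (a sketch only; CallLog, CallDate and CallLogArchive are placeholders for your call table, its date column and an archive table you create with the same structure) is to copy the old rows into an archive table, export that to a file for tape, and then delete the rows:

-- Copy calls older than 60 days into the archive table
INSERT INTO CallLogArchive
SELECT *
FROM CallLog
WHERE CallDate < DATEADD(day, -60, GETDATE())

-- After exporting CallLogArchive to a file (e.g. with bcp) and writing it to tape,
-- remove the archived rows from the live table
DELETE FROM CallLog
WHERE CallDate < DATEADD(day, -60, GETDATE())
-- On a table growing by 900,000 rows a month you may want to delete in
-- batches (e.g. SET ROWCOUNT plus a loop) to keep the transaction log small.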
Hi, I've got a simple SqlDataSource which is wired up to a GridView. When the page loads, it's sometimes likely the data source will contain no data. How can I test for that (no data) in the code-behind? I'd like to only show a button if there is data. I'm using ASP.NET 2.0 with C#. Many thanks, Richard
Can someone advise me on the best way to loop through a collection of columns for each row in a table, testing the value of each column for NULLs etc., and then entering the column name into another column where the condition is true?
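One way to do this without hard-coding every column is a cursor over INFORMATION_SCHEMA.COLUMNS plus dynamic SQL. A sketch only: MyTable and FlagColumn are placeholders for your table and for the column that collects the names of the NULL columns.

-- For each column of MyTable (except FlagColumn itself), append the column
-- name to FlagColumn on every row where that column is NULL.
DECLARE @col sysname, @sql nvarchar(4000)

DECLARE col_cur CURSOR FOR
    SELECT COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'MyTable'
      AND COLUMN_NAME <> 'FlagColumn'

OPEN col_cur
FETCH NEXT FROM col_cur INTO @col
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'UPDATE MyTable
                 SET FlagColumn = ISNULL(FlagColumn, '''') + @colname + '';''
                 WHERE ' + QUOTENAME(@col) + N' IS NULL'
    EXEC sp_executesql @sql, N'@colname sysname', @colname = @col
    FETCH NEXT FROM col_cur INTO @col
END

CLOSE col_cur
DEALLOCATE col_cur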
This is the code I'm using to insert data into a table with data from one table, but changing a single column to prevent duplicates and to increase the number of rows. I know I'm missing something easy, but... what is it?
thx,
Kat
code to create the table:
CREATE TABLE [OrdDetails] (
    [OrderID] [int] NOT NULL ,
    [ProductID] [int] NOT NULL ,
    [UnitPrice] [money] NOT NULL ,
    [Quantity] [smallint] NOT NULL ,
    [Discount] [real] NOT NULL ,
    CONSTRAINT [PK_OrdDetails] PRIMARY KEY CLUSTERED ( [OrderID], [ProductID] ) ON [PRIMARY]
) ON [PRIMARY]
GO
I inserted all the data from the [Order Details] table; now I'm trying to reinsert the data, but changing the key values so the PK constraint isn't violated. It seems like this should work.
TRUNCATE TABLE OrdDetails
INSERT INTO OrdDetails SELECT * FROM [Order Details]
/* note: table has 2155 rows to start */
DECLARE @CTR INT, @CTR2 INT
SELECT @CTR = 0
SELECT @CTR2 = 0
WHILE @CTR < 3000 AND @CTR2 < 3000
BEGIN
    SELECT @CTR = @CTR + 1
    SELECT @CTR2 = @CTR2 + 1
    INSERT ORDDETAILS
    SELECT @CTR, (ProductID + @CTR2), UnitPrice, Quantity, Discount
    FROM [Order Details]
END
I still get primary key errors about not being able to insert duplicates. This is probably easy, but I'm missing something. Help!
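The duplicates most likely come from within a single pass of the loop: for a fixed @CTR, the SELECT returns every row of [Order Details], and the same ProductID appears in many different orders, so (@CTR, ProductID + @CTR2) repeats inside one INSERT statement. One way around it (a sketch, not the only approach) is to keep each source row's own ProductID and shift the OrderID by a per-pass offset, so every pass produces a fresh, non-overlapping set of (OrderID, ProductID) pairs:

-- Assumes OrdDetails has just been truncated and re-populated from [Order Details]
DECLARE @Pass int, @MaxOrderID int
SELECT @MaxOrderID = MAX(OrderID) FROM OrdDetails
SELECT @Pass = 1
WHILE @Pass <= 10   -- 10 passes x 2155 rows = about 21,550 extra rows
BEGIN
    INSERT OrdDetails (OrderID, ProductID, UnitPrice, Quantity, Discount)
    SELECT OrderID + (@MaxOrderID * @Pass), ProductID, UnitPrice, Quantity, Discount
    FROM [Order Details]
    SELECT @Pass = @Pass + 1
END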
I am looking for a number of concept hierarchies (i.e. hierarchies with is-a relationships) describing the same domain that are accompanied by the instances from which each concept hierarchy was obtained. I have been trying to find data suitable for representing extracted knowledge in the form of concept hierarchies, so that I can obtain at least two concept hierarchies from the same domain that differ slightly. In this sense the data needs to have multiple class (target) values, and there has to be a hierarchical relationship between the classes. I have looked at the datasets publicly available from the UCI website and some others, but so far it appears that only the Zoo dataset is suitable to be represented in the form of a concept hierarchy, and even that is closer to a decision tree. I have seen some bird-domain or, more specifically, animal-kingdom hierarchies, but I cannot find the more specific datasets. Does anyone know where concept hierarchies with instances can be obtained, or datasets with multiple hierarchically related class values? Any help would be greatly appreciated.
I just recently added 30 MB of SQL Server database space on my shared hosting account.
I want to put the SQL Web Data Administrator on the server but it is an MSI file and I cannot figure out how to install it.
Also, I will be testing my .Net pages on my local machine. How do I go about it without accessing the SQL Server on my host? I used MS Access before and I have a copy of both databases on my local machine and on the server. I'm thinking of using MSDE on my machine and I just change the connection string when I upload my code. Is this a good idea or is there a better alternative?
I'm working in a bank and I'm creating a report for my company. I just started learning SQL last month. I hope this forum would help me. Here's my problem:
I have data for the last 6 months. I want to filter on current-quarter and last-quarter data. This means that, in the next quarter, the filter should automatically move forward. Is that possible?
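Yes, this can be done with date arithmetic relative to today, so the filter rolls forward on its own. A sketch, assuming a table MyReport with a date column ReportDate (both names are placeholders):

-- DATEDIFF(quarter, ...) counts quarter boundaries, so 0 = current quarter
-- and 1 = last quarter
SELECT *
FROM MyReport
WHERE DATEDIFF(quarter, ReportDate, GETDATE()) IN (0, 1)

-- An index-friendlier equivalent: everything from the first day of last
-- quarter up to (but not including) the first day of next quarter
SELECT *
FROM MyReport
WHERE ReportDate >= DATEADD(quarter, DATEDIFF(quarter, 0, GETDATE()) - 1, 0)
  AND ReportDate <  DATEADD(quarter, DATEDIFF(quarter, 0, GETDATE()) + 1, 0)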
TASK: At my work we want to categorize and summarize all our IIS web logs and make statistics from it and such. What I need to do is take the browser type from a certain column in the table. All the information is stored in 1 column, and I figure an instr function would be best to do this. I am new to SQL, so I was told to look up the cursor function. In summary, I want to take all the IIS data and match it up against a defined table and then have a sum function for each browser.
Here are some examples of what the column data looks like (found in the [csMethod] column):
I made a definition table (define.browser) which lists an ID (primary key) and the string to search for, as well as the full browser name.
DEFINE.BROWSER
ID   Search string   Browser name
---  --------------  -------------------------------
1    Opera+7         Opera 7
2    Opera/9         Opera 9
3    Safari/         Safari
4    Firefox/1.0     Mozilla Firefox 1.0
5    Firefox/1.5     Mozilla Firefox 1.5
6    Firefox/2.0     Mozilla Firefox 2.0
7    MSIE+5.5        Microsoft Internet Explorer 5.5
8    MSIE+5          Microsoft Internet Explorer 5
9    MSIE+6          Microsoft Internet Explorer 6
10   MSIE+7          Microsoft Internet Explorer 7
11   (none)          OTHER BROWSER
I am having problems getting a cursor to work. Are there any good tutorials out there, or can anyone be of assistance. Thank you in advance.
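A cursor will work, but a set-based join is usually simpler for this kind of counting. A sketch only: IISLog is a placeholder for your log table, and SearchString / BrowserName are assumed column names in define.browser. CHARINDEX is T-SQL's equivalent of instr.

SELECT b.BrowserName,
       COUNT(*) AS Hits
FROM IISLog AS l
JOIN define.browser AS b
    ON CHARINDEX(b.SearchString, l.csMethod) > 0
GROUP BY b.BrowserName
ORDER BY Hits DESC
-- Caveats: overlapping patterns (e.g. MSIE+5 vs MSIE+5.5) will double-count
-- unless you pick only the most specific match, and the catch-all
-- "OTHER BROWSER" row has no search string, so it needs separate handling.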
Using symmetric keys and certificates in SQL2005, can one assign users permission to only decrypt or encrypt data?
The reason would be, say, data-capturer and data-reader type roles. I tried to create some with GRANT CONTROL and GRANT VIEW DEFINITION on the certificates and symmetric keys, but I haven't been too successful.
It would be great if someone here could offer some advice on it, and whether it's possible using SQL rights.
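A very rough sketch of one way this split is sometimes attempted with certificate (asymmetric) encryption rather than a symmetric key, since opening a symmetric key gives both EncryptByKey and DecryptByKey together. EncCert, DataCapturerRole and DataReaderRole are placeholders, and the exact permission requirements are from memory, so verify them against Books Online before relying on this.

-- "Data capturer": encrypt with the certificate's public key
GRANT VIEW DEFINITION ON CERTIFICATE::EncCert TO DataCapturerRole
-- usage: EncryptByCert(Cert_ID('EncCert'), @plaintext)

-- "Data reader": decrypt, which needs access to the private key
GRANT CONTROL ON CERTIFICATE::EncCert TO DataReaderRole
-- usage: DecryptByCert(Cert_ID('EncCert'), @ciphertext)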
Is it possible in SQL Server 2005 to limit the number of processors used? For cost reasons, we are consolidating servers and want to start running SQL Server 2005 on one of our dual-processor Win2K3 machines instead of the standalone machine it's currently running on. Because we have about 75 users, it's only cost effective to purchase a processor license (vs. a server license with CALs). But right now we only need and can only afford a single processor license, not two. So...
Is there any way in 2005 to limit the number of processors used so that we only need to purchase one processor license? I know in 2000 you could set this on the "Processor" tab of the "SQL Server Properties" dialog. In 2005, is this accomplished by unchecking the "Processor Affinity" and "I/O Affinity" checkboxes for processor #2 on the "Processors" page of the "Server Properties" dialog? If I uncheck these two options does that fully disable SQL Server 2005 from accessing the second processor in any way? From things I've read I can't tell if it restricts access to the second processor completely or if it just places some limitations on the ways it accesses the second processor. Also, the licensing information for SQL Server 2005 leads me to believe that if you are going down the "processor licensing" route that you have to buy a processor license for every processor that the OS itself has access to and not just what processors SQL Server has access to. I thought I understood that in SQL Server 2000 the licensing information did allow you to buy a processor license just for each processor that SQL Server 2000 had access to, but has that changed for 2005?
Hope someone can provide some clarification on limiting processor access and the licensing implications for SQL Server 2005.
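For the technical half of the question, the same processor affinity setting shown on the Processors page of Server Properties can also be set in T-SQL. A sketch that restricts SQL Server's worker scheduling to the first processor:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 1;   -- bit 0 set = use processor 0 only
RECONFIGURE;
-- There is also an 'affinity I/O mask' option for I/O threads.

Whether an affinity setting is sufficient for per-processor licensing is a licensing question rather than an engine question, so that part is best confirmed with Microsoft licensing documentation or a reseller.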
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it did not work the same way as it did when you had RAID 5 arrays, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
--Example schema posted at end of message--
For reporting purposes, I need to build a single comma delimited string of values from the "many" table of a database. The easiest way to see what I want is to look at the sample after my signature. By the way, this is actually a business problem, not homework! I just created a simple example using classes and persons because everyone is familiar with that relationship.

I have two tables on the 'one' side of the relationship: PERSON and CLASS. The ENROLLMENT table resolves the many-to-many relationship between PERSON and CLASS. (I know that a real system would be date effective, etc., but this is just a simple example that will show my problem.) ENROLLMENT has one row for each Class in which a Person is enrolled.

Look at the sample report: I have to "flatten" the join result and list the class titles in a comma delimited string. I am stuck with this reporting requirement, and I am NOT going to denormalize the tables.

One way to accomplish the result is to use a cursor to step through the rows and build a "Classes" string with concatenation. I don't much like this option. I am not writing the front end code, but I want to make it easy for the developer. Ideally, I would like to give him a flattened view so he can just do a simple join and run his report.

I believe that what I want cannot be accomplished with ANSI SQL. However, does MS SQL have some extensions that could help me do the job? Failing that, how could I write a stored procedure that would return the PersonID and the "Classes" string in a format that would be joinable to the other tables?

Thanks,
Bill MacLean

P.S. Some people like to see actual database scripts as samples instead of a textual representation. I have pasted in a script that creates sample tables and populates them.

--Sample tables and report:

TABLE PERSON
PersonID  LastNM  FirstNM
--------  ------  -------
1         Smith   John
2         Jones   Sara
3         Smith   Lucille

TABLE CLASS
ClassID  ClassNM
-------  ----------------------
10       SQL Server 101
20       C++
25       Object Oriented Design
40       Inorganic Chemistry
50       Organic Chemistry
80       Early Lit.

TABLE ENROLLMENT
PersonID  ClassID
--------  -------
1         10
2         10
1         40
1         80
3         20
3         25

SAMPLE REPORT
Person ID  Name            Classes
---------  --------------  -----------------------------------------------
1          Smith, John     SQL Server 101, Inorganic Chemistry, Early Lit.
2          Jones, Sara     SQL Server 101
3          Smith, Lucille  C++, Object Oriented Design

/**********************************************************
SQL Server Script
**********************************************************/
CREATE TABLE [dbo].[CLASS] (
    [ClassID] [int] NOT NULL ,
    [ClassNM] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[ENROLLMENT] (
    [PersonID] [int] NOT NULL ,
    [ClassID] [int] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[PERSON] (
    [PersonID] [int] NOT NULL ,
    [LastNM] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [FirstNM] [varchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CLASS] WITH NOCHECK ADD
    CONSTRAINT [PK_CLASS] PRIMARY KEY CLUSTERED ([ClassID]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ENROLLMENT] WITH NOCHECK ADD
    CONSTRAINT [PK_ENROLLMENT] PRIMARY KEY CLUSTERED ([PersonID], [ClassID]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[PERSON] WITH NOCHECK ADD
    CONSTRAINT [PK_PERSON] PRIMARY KEY CLUSTERED ([PersonID]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ENROLLMENT] ADD
    CONSTRAINT [FK_ENROLLMENT_CLASS] FOREIGN KEY ([ClassID]) REFERENCES [dbo].[CLASS] ([ClassID]),
    CONSTRAINT [FK_ENROLLMENT_PERSON] FOREIGN KEY ([PersonID]) REFERENCES [dbo].[PERSON] ([PersonID])
GO
-- Insert row for each CLASS
INSERT INTO CLASS VALUES (10, 'SQL Server 101');
INSERT INTO CLASS VALUES (20, 'C++');
INSERT INTO CLASS VALUES (25, 'Object Oriented Design');
INSERT INTO CLASS VALUES (40, 'Inorganic Chemistry');
INSERT INTO CLASS VALUES (50, 'Organic Chemistry');
INSERT INTO CLASS VALUES (80, 'Early Lit.');
-- Insert row for each PERSON
INSERT INTO PERSON VALUES (1, 'Smith', 'John');
INSERT INTO PERSON VALUES (2, 'Jones', 'Sara');
INSERT INTO PERSON VALUES (3, 'Smith', 'Lucille');
-- Insert row for each ENROLLMENT
INSERT INTO ENROLLMENT VALUES (1, 10);
INSERT INTO ENROLLMENT VALUES (1, 40);
INSERT INTO ENROLLMENT VALUES (1, 80);
INSERT INTO ENROLLMENT VALUES (2, 10);
INSERT INTO ENROLLMENT VALUES (3, 20);
INSERT INTO ENROLLMENT VALUES (3, 25);
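On SQL Server 2005 or later, the FOR XML PATH('') technique builds exactly this kind of delimited list without a cursor; on SQL 2000 the same correlated logic would typically go into a scalar UDF instead. A sketch against the sample schema above, which could also be wrapped in a view so the front-end developer just joins to it:

SELECT  p.PersonID,
        p.LastNM + ', ' + p.FirstNM AS Name,
        STUFF((SELECT ', ' + c.ClassNM
               FROM ENROLLMENT e
               JOIN CLASS c ON c.ClassID = e.ClassID
               WHERE e.PersonID = p.PersonID
               ORDER BY c.ClassNM
               FOR XML PATH('')), 1, 2, '') AS Classes
FROM PERSON p
ORDER BY p.PersonID
-- STUFF(..., 1, 2, '') strips the leading ", " from the built string.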
We are running Reporting Services 2008 R2 on a Windows Server 2008 Standard 64-Bit server. I have a user that has full access to Reporting Services at all folder levels but IS NOT a local administrator on the 2008 server.
This user can create data source connections, but when he tries to test the connection by clicking the 'Test Connection' button, he gets the following error: "The permissions granted to user <username> are insufficient for performing this operation." A user that has administrator privileges on the server can test the connection fine.
I don't want to make this user an administrator on the server.
This is an extract from the log file:
library!ReportServer_0-24!3478!08/16/2011-13:45:37:: Call to TestConnectForDataSourceDefinitionAction().
library!ReportServer_0-24!3478!08/16/2011-13:45:37:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: , Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user <username> are insufficient for performing this operation.;
I have a table ComponentPeriod. In it we have the combination of a component (e.g. A, B, C) and a period (2014 Q1, 2014 Q2, 2014 January, etc.). I want the periods to be in descending order (2015 Q4, 2015 Dec, 2015 Nov, 2015 Oct, 2015 Q3, etc.), and so I need to create a sequential number series to allow this to happen (as we can only order in the client tools by a single column, and so I guess the technique I'm looking for is used a lot to produce these types of "order by" columns).
This was fine when I was referring to a table where periods were distinct directly, but now I have denormalised this for ComponentPeriod, so I need something a little more sophisticated. What's the best way to get a sequence, perhaps with some partitions, across a subset of distinct columns (I guess from SUMMARIZE or similar),
even though there may be multiple records in ComponentPeriod that have the period 2015 Q4? I want them all to have the same Sequence value of 1. I've got as far as:
Hi! I'm using replication with two databases on SQL 2000. When we began, the log file size was 50 MB and the data file size was 150 MB, but now the log file is 2 GB and the data file is 4 GB. I would like to decrease the log files and the data files. How do I do this? (Using truncate and shrink doesn't change anything.) Thanks!
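One thing worth knowing is that replication itself can keep the log from shrinking: log records are retained until the log reader agent has processed them. Once that backlog is clear, a sketch of the usual SQL 2000 sequence (MyDB and the logical file names are placeholders; use sp_helpfile inside the database to see the real logical names):

USE MyDB
GO
-- Or take a normal log backup if you need point-in-time recovery
BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDB_Log, 100)    -- target size in MB
GO
-- Shrinking the data file is possible too, but it fragments indexes heavily,
-- so follow it with index rebuilds if you do it:
DBCC SHRINKFILE (MyDB_Data, 1000)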
I created the db with the attached script and I am able to access it until I reboot the server. I've tried enabling flag 1807 via the SQL Server service and the startup parameters of the instance. In all cases the database always comes up suspect after a reboot. There was one instance where I was able to recover, but I am not sure how that happened. Does anyone have an idea of how I can reboot the server without the database becoming suspect?

USE MASTER
GO
DBCC TRACEON(1807)
GO
--DBCC TRACEOFF(1807)
--DBCC TRACESTATUS(1807)
GO
CREATE DATABASE ReadyNAS
ON
( NAME = ReadyNAS_Data,
  FILENAME = '\\NAS1\NASDisk\SQL Server\ReadyNAS\ReadyNAS_Data.mdf',
  SIZE = 100MB,
  MAXSIZE = 20GB,
  FILEGROWTH = 20MB )
LOG ON
( NAME = ReadyNAS_Log,
  FILENAME = '\\NAS1\NASDisk\SQL Server\ReadyNAS\ReadyNAS_Log.ldf',
  SIZE = 20MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 10MB )
Hello all. Before my arrival at my current employer, our consultants physically set up our MSSQL 7 server as follows: drive C: contains the MSSQL engine, drive D: contains the transaction log, and drive E: contains the data files. No filegroups were set up, and the data files consist of only one large physical file. Currently, our data file is over 10 GB. When I was trained on the physical aspects of SQL Server, I was told never to create physical files larger than 2048 MB each; if I did, I could expect inefficient physical storage of data and slower performance (due to the OS). Our server has two RAID-5 arrays; drives C: and E: are located on the first array and drive D: on the second. We're running Windows NT 4.0 Server SP6 with NTFS. Can someone comment on the use of one single large data file vs. more, smaller data files?
I have an MVC ASP.NET application that stores many records in a table on SQL Server, in its own system. We used the system for 2 months and it worked fine for accessing and changing data.
Now that other users are logging in, there is cross-coupling going on: one user gets the data from another user's SQL search.
In the MVC app it had been using an async GET method to read the ID record from the DB; I set that to synchronous with no effect. Each user creates their own login ID, but that doesn't matter either.
I am really in need of help. I have a text file consisting of some data. I want to update my database from that text file periodically, say every 12 hours. The text file is being updated by another server program every 12 hours. Can anyone help me with this scenario? I am lost. Help me please.
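A common pattern for this (a sketch only; the path, table names and delimiters below are placeholders for whatever the other program actually writes) is to bulk-load the file into a staging table and then merge it into the live table, with the whole thing wrapped in a stored procedure that a SQL Server Agent job runs every 12 hours:

-- Clear and reload the staging table from the text file
TRUNCATE TABLE StagingImport

BULK INSERT StagingImport
FROM 'C:\feeds\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- ...then INSERT/UPDATE the live table from StagingImport as needed.
-- Put these statements in a stored procedure and schedule it as a
-- SQL Server Agent job that runs on a 12-hour schedule.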
Hi, I have (had) an old Win2K Server box with about 30 web site databases (SQL 2000) that just went under due to hardware problems. Thankfully, I have backups of all the databases plus the MDF and LDF files from the hard drive. I want to move all of these sites and their data to a newer server (Win2003) running SQL 2000. What's the best way to copy the databases from the old server hard drive (now mounted as an external drive on a local machine; I'm currently FTPing all of the web site directories from it to the new server)? Just upload the original data files to the new server and then attach the MDF and LDF files within the new SQL Server? Or do I restore the backup files into the new SQL 2000? All of my previous data migrations have been DTS operations from one live server to another, so I have no experience with either of the above scenarios. I'll certainly have a lot more experience with one of them by the time this weekend is through. Thanks for any help you can offer.
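Both routes work; attaching the copied files is quicker if they are intact, while restoring the backups is the safer fallback. A sketch of each in SQL 2000 syntax (database names, logical file names and paths are placeholders):

-- Option 1: attach the copied MDF/LDF
EXEC sp_attach_db @dbname = 'MySiteDB',
     @filename1 = 'E:\Data\MySiteDB.mdf',
     @filename2 = 'E:\Data\MySiteDB_log.ldf'

-- Option 2: restore from the backup file instead
RESTORE DATABASE MySiteDB
FROM DISK = 'E:\Backups\MySiteDB.bak'
WITH MOVE 'MySiteDB_Data' TO 'E:\Data\MySiteDB.mdf',
     MOVE 'MySiteDB_Log'  TO 'E:\Data\MySiteDB_log.ldf'

-- In either case, remap orphaned database users to the new server's logins
-- afterwards with sp_change_users_login.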
I have a SQL 6.5 server that has had its OS (NT 4.0) reinstalled on C:. SQL was installed on D: and all of the DAT files are still there. I need to recover data from one of them. I tried installing on a new machine and creating a device/DB with the same name and then overwriting the file with the old one, but Enterprise Manager shows it as suspect and will not let me work on it. Any assistance would be appreciated.
I have a database with two data files configured under the primary filegroup. Both files are set to grow automatically until the disk space is full, and both data files are on the same drive on the same physical disk. When transactions are happening in the database, how does the data get distributed across both data files? Are rows inserted into the primary data file first, and only once it is full do they start occupying the secondary data file? How does this distribution happen?
I am installing SQL Server 2005 on our server and have researched with no success on how to redirect the Data Files and Log files to a different drive, e.g. D: drive. In SQL Server 2000 you specified where you wanted to place the LOG files and Data files upon Setup.
I have looked everywhere in SQL Server 2005 install and cannot find where I can tell it to place the Data and Log files. Please point me in the right direction.
Hi, just wondering if any expert out there can answer my question.
I've got a database separated into 3 data files on 3 different drives; I'll name them A, B, and C. Data file A is 30 GB, B is 15 GB, and C is 15 GB, and the drive holding data file A is almost full. Is there any way I can move some of the data from data file A to the other data files? Or, since A + B + C = 60 GB, can I make them 20 GB each with some command? Thanks! :)
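There isn't a single command that rebalances the files to equal sizes, but DBCC SHRINKFILE with EMPTYFILE migrates a file's pages into the other files of the same filegroup, which may be the closest fit. A sketch only: DataFileA and MyDB are placeholders (check logical names with sp_helpfile), it assumes A is not the primary .mdf, and the drives holding B and C need enough free space to receive the data.

USE MyDB
GO
-- Move all of data file A's pages into the other files in its filegroup
DBCC SHRINKFILE ('DataFileA', EMPTYFILE)
GO
-- Once empty, the file can be removed (and, if desired, a new smaller file
-- added back on the A drive with ALTER DATABASE ... ADD FILE, letting
-- proportional fill spread new allocations across the files)
ALTER DATABASE MyDB REMOVE FILE DataFileA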
I have a SQL Server 2000 database that has never been backed up. The SQL Server database can't be logged into due to an SSL Security Error, thus I can't get to the backup utilities within Enterprise Manager.
What data files do I need to backup manually and what steps do I have to take to backup these files to a tape and rebuild the server?
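Since the instance can't be logged into, the usual fallback is an offline file copy: stop the SQL Server service, then copy every .mdf/.ndf/.ldf under the instance's Data directory (the user databases plus master, model and msdb) to tape. After the server is rebuilt, each user database can be reattached. A sketch in SQL 2000 syntax with placeholder names and paths:

EXEC sp_attach_db @dbname = 'MyUserDB',
     @filename1 = 'D:\Data\MyUserDB.mdf',
     @filename2 = 'D:\Data\MyUserDB_log.ldf'
-- Repeat per database; recreate or remap logins afterwards.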
I have the .mdf and .ldf files from an MS SQL 2000 server. I don't have MS SQL Server running on my home machine. I'd like to be able to extract the data from the .mdf file into something I could use on my home machine. XML would be good, CSV would be OK too, maybe even some way to import into MySQL. Any help would be appreciated; also, if I've somehow missed the forum FAQ, please point me to it.
I'm kind of new to sql server (but experienced in Oracle) and I've got a couple of questions I wanted to bounce off you guys.
I'm implementing a SQL Server cluster right now (2 nodes on Win2K3, shared EMC DASD for the databases). We're at the very preliminary phase of this. I did an install and had my resource group set up with all of my disks on it. When prompted for the data file drive, I gave it one, but it put all the tlogs for the 'out of the box' databases (i.e. master, model, tempdb, etc.) on that same drive as the data files. The doc is a little vague in some of these areas (i.e. it says to separate logs and data files onto different disks, but then never actually tells you how to do that).
Now, I know how to specify the default paths for data and transaction logs for any NEW database I create and that's not a problem. However, my question is, how do I 'move' the tlogs from the databases created during the install? I've tried a detach, move tlog to separate physical drive and then reattach the db, but whenever I do this, SQL server wants to create a new tlog for the db on the same old drive as the datafile. I also can't delete the original tlog from a particular database even after I've created an additional tlog on another disk.
Any help is much appreciated. I'm more or less looking for the strategy any of you might take to set up this initial phase.
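Of the databases created by setup, tempdb is the straightforward one, because it is rebuilt from its file definitions at every service restart, so no detach/attach is needed; master and the other system databases have their own documented move procedures (master's involves changing the service startup parameters). A sketch for tempdb's log, with the target path as a placeholder (on a cluster it must be a shared disk in the SQL Server resource group):

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\SQLLogs\templog.ldf')
GO
-- The new location takes effect the next time the SQL Server service restarts.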
We have a large database (91 GB) that is currently in one large data file. Now that we have multiple disk arrays I can split it across, I would like to have a couple of data files. My question is, what is the best way to split this up? Should I keep one primary filegroup and just create another file, or should I create a filegroup for indexes and put those on it? This database is used for reporting only, so it doesn't really have any writes being done on it.
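For reference, both options come down to ALTER DATABASE. A sketch with placeholder names, paths and sizes (the database, logical file names and drive letters are assumptions):

-- Option 1: a second file in the PRIMARY filegroup on the new array
ALTER DATABASE BigReportDB
ADD FILE (NAME = BigReportDB_Data2,
          FILENAME = 'F:\Data\BigReportDB_Data2.ndf',
          SIZE = 20480MB, FILEGROWTH = 1024MB)
GO
-- Option 2: a dedicated filegroup for indexes
ALTER DATABASE BigReportDB ADD FILEGROUP Indexes
GO
ALTER DATABASE BigReportDB
ADD FILE (NAME = BigReportDB_Idx1,
          FILENAME = 'G:\Data\BigReportDB_Idx1.ndf',
          SIZE = 20480MB, FILEGROWTH = 1024MB)
TO FILEGROUP Indexes
GO
-- Nonclustered indexes are then moved by rebuilding them ON the Indexes filegroup.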
We are storing all our SQL 2000 databases on SAN LUNs, and one of our databases currently uses a single 40GB file which is approaching capacity. If we add further files using different LUNs, the data will start being added to these new files. My questions are these: if we were to add a number of new LUNs to this database, is there a way to redistribute the existing data so it is balanced across all files in order to gain the most benefit from having multiple files, rather than just dispersing the additional fragments across the new files? Will the optimise feature of the maintenance plan do this automatically during the index rebuilds? Is it better to add more files to the PRIMARY filegroup, or add a number of filegroups with single files in each? We aren't looking to use filegroups for fiddling with our backups by the way. Many thanks for any recommendations offered.