SQL Server 2008 :: Create Test Portfolio Data Summing To 100%?
Feb 5, 2015
I'm building a proc to generate fake stock portfolios for testing. I have a list of thousands of symbols, and I want the tester to be able to select how many symbols they want in their fake portfolio, and then give each symbol a random weighting (i.e. percentage held in that security) which, across all the symbols, sums to 100%. The securities here are not the part I care about, it's the weightings summing to 100 that's important.
So test data would look something like this:
--This is the repository of potential symbols I can add to a fake portfolio.
-- So the simple part is basically select top (@symbolCt) from #PossibleSymbols, plus some magic I have yet to determine
if object_id('tempdb.dbo.#PossibleSymbols') is not null drop table #PossibleSymbols
create table #PossibleSymbols
(
SymbolID int
)
insert into #PossibleSymbols (SymbolID) values (1), (2), (3) -- ...and so on for the full list of symbol IDs
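A minimal sketch of the weighting step, assuming #PossibleSymbols is populated as above; @symbolCt, the helper temp table and the decimal precision are all illustrative choices:

declare @symbolCt int = 10;

if object_id('tempdb.dbo.#FakePortfolio') is not null drop table #FakePortfolio

-- pick the requested number of symbols and give each one a raw random value
select top (@symbolCt)
    SymbolID,
    cast(abs(checksum(newid())) % 1000 + 1 as decimal(18,6)) as RawWeight
into #FakePortfolio
from #PossibleSymbols
order by newid();

-- normalize the raw values so the weights sum to 100
select
    SymbolID,
    cast(100.0 * RawWeight / sum(RawWeight) over () as decimal(9,4)) as WeightPct
from #FakePortfolio;

Note that rounding each weight to 4 decimals can leave the total a few hundredths away from 100; if the sum has to be exact, adjust the last row by the difference.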
I've been tasked to generate some test data (a few thousand rows) into a new table in a new database. This database is a whole new idea, so I can't write a query to pull pieces of data from other databases. I cannot consider any third party tools, such as what Redgate or Idera has to offer. I can't consider free tools such as what I've found on GitHub. I've been instructed to restrict myself to Visual Studio 2013 and whatever I can get that works within that.
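If the restriction really does come down to plain T-SQL run from Visual Studio 2013, one common approach is a numbers CTE that stamps out however many rows you need; the target table and column names below are placeholders, not the actual schema:

;with Numbers as (
    select top (5000) row_number() over (order by (select null)) as n
    from sys.all_objects a cross join sys.all_objects b
)
insert into dbo.MyNewTable (SomeCode, SomeName, CreatedDate)
select
    'CODE' + right('00000' + cast(n as varchar(5)), 5),   -- unique-ish code
    'Test name ' + cast(n as varchar(10)),                -- readable filler text
    dateadd(day, -(n % 365), getdate())                   -- dates spread over the last year
from Numbers;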
I am looking for a SQL code snippet which reads data from the table below:
UserId   username   contact
1        Anil       111
2        Sunil      222
and inserts data into the table below, with some test data appending a sequence number (1, 2, 3) to the City and Email values only. They are two different tables and there is no referential integrity between them. The number of records inserted per user is configurable, for example count = 3.
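A sketch of one way to do this, assuming a source table dbo.Users (UserId, username, contact) and a target table dbo.UserContacts (UserId, City, Email); the base City/Email values and @count are placeholders:

declare @count int = 3;

;with Seq as (
    select top (@count) row_number() over (order by (select null)) as n
    from sys.all_objects
)
insert into dbo.UserContacts (UserId, City, Email)
select
    u.UserId,
    'City' + cast(s.n as varchar(10)),                                                      -- City1, City2, City3
    'user' + cast(u.UserId as varchar(10)) + '_' + cast(s.n as varchar(10)) + '@test.com'   -- sequence number in the address
from dbo.Users u
cross join Seq s;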
CREATE TABLE #tblTemplateBlocks ( TemplateID int, BlockID int, OrderID int
[Code] ....
I have a table called TemplateBlocks which contains which Blocks are on a Template. In this example - just one template - with three Blocks.
Table tblFields contains a list of Fields that are on each TemplateID/BlockID. In this example there are 3 fields on each TemplateID/BlockID pair.
Before I can use a template, I have to check that, in tblFields, for each Template/BlockID pairing - one of the fields must be set as the Stage Base (I cannot have 2 fields as StageBase or no fields as StageBase). In the example data above, the data would be okay as each Template/BlockID pairing has one row where StageBase is true.
Having checked that each Template/BlockID pairing has a StageBase, I need to check that each row where StageBase is true has a value for the WeekStart column and that, taking into account the order of the Blocks in tblTemplateBlocks, the values in WeekStart for each TemplateID/BlockID pairing are getting progressively bigger.
So, for example, the example data above would fail because the third TemplateID/BlockID pairing has no value for the WeekStart column in the row where StageBase is true.
If I added a value of 2 for WeekStart in the row for the third TemplateID/Block that has a StageBase of true - again the data would fail because, taking into account the order of the Blocks - the values for WeekStart would be 0,3,2 and these numbers need to increase.
0,3,4 would be fine. 0,3,10 would be fine. 0,3,3 would fail.
I can do this easily using a cursor or two - but how can I do it without cursors?
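A hedged, set-based sketch of both checks, assuming tblFields has TemplateID, BlockID, StageBase (bit) and WeekStart columns and that #tblTemplateBlocks supplies the block order; any rows returned mean the template fails:

-- check 1: every Template/Block pairing must have exactly one StageBase row
select tb.TemplateID, tb.BlockID
from #tblTemplateBlocks tb
left join tblFields f
    on f.TemplateID = tb.TemplateID and f.BlockID = tb.BlockID and f.StageBase = 1
group by tb.TemplateID, tb.BlockID
having count(f.BlockID) <> 1;

-- check 2: WeekStart on the StageBase rows must be non-null and strictly increasing in block order
;with StageRows as (
    select tb.TemplateID, tb.OrderID, f.WeekStart,
           row_number() over (partition by tb.TemplateID order by tb.OrderID) as rn
    from #tblTemplateBlocks tb
    join tblFields f
        on f.TemplateID = tb.TemplateID and f.BlockID = tb.BlockID and f.StageBase = 1
)
select cur.TemplateID
from StageRows cur
left join StageRows prev
    on prev.TemplateID = cur.TemplateID and prev.rn = cur.rn - 1
where cur.WeekStart is null
   or (prev.WeekStart is not null and cur.WeekStart <= prev.WeekStart);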
Can someone assist? I'm using the application Project Portfolio Server 2006. I have Reporting Services set up, and when I try to create a custom report this is the error I receive:
The report server cannot process the report. The data source connection information has been deleted. More Info: http://go.microsoft.com/fwlink/?LinkId=20476&EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&EvtID=rsInvalidDataSourceReference&ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&ProdVer=9.00.2047.00
Also does anyone know of a good location for information on troubleshooting Project Portfolio Server 2006? Can't find much good documentation.
I am working towards automating the process of testing our backups. For the meantime, I do it all manually - I copy the backup files (full + transaction logs) to our test server and then run the restore script. Once the database is restored I run DBCC CHECKDB. The results of CHECKDB I manually upload to our SharePoint portal as proof that the backup file is intact with no errors.
Here are some ideas I have but have not yet tested:
Create a maintenance plan with 3 jobs:
--> PowerShell script to copy the files from the Prod server to the Test server - add this script to Job1
--> PowerShell script to restore the database files - add this script to Job2
--> Run DBCC in PowerShell (yet to find out if this is possible in PS) - add this script to Job3
I would like to use separate jobs so as to get a report on the duration and status of each job.
Would also like to get the results of the DBCC CHECKDB as proof that no errors were found, for upload to our SharePoint portal. Don't know if that is possible via the job.
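For the restore and CHECKDB pieces, the underlying T-SQL is simple enough to wrap either in a PowerShell step (for example via Invoke-Sqlcmd) or in a plain T-SQL job step; the database name, logical file names and paths below are placeholders:

-- restore the copied full backup (use NORECOVERY here if transaction log backups will be restored afterwards)
RESTORE DATABASE TestRestoreDB
FROM DISK = N'D:\RestoreTest\ProdDB_Full.bak'
WITH MOVE N'ProdDB_Data' TO N'D:\RestoreTest\TestRestoreDB.mdf',
     MOVE N'ProdDB_Log'  TO N'D:\RestoreTest\TestRestoreDB.ldf',
     REPLACE, RECOVERY;

-- integrity check on the restored copy; capture this output as the proof for the SharePoint upload
DBCC CHECKDB (TestRestoreDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;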
I use SQL Server 2005 Dev Edition and am not new to making databases (then again, I've had enough experience and my dad does the same thing).
I am (unfortunately) a university student and for my dissertation I am going to produce a SQL Server database with a strong emphasis on data mining.
Obviously, for the data mining to be useful at all I need to produce loads and loads of test data.
Fair enough, and there are applications which do this, such as EMS Data Gen, but can anyone recommend any other data gen utilities? EMS Data Gen has poor handling of unique attributes, and as I am modelling a car manufacturer this will give me problems when I come to the registration number attribute.
Also, why are utilities for SQL Server (and Oracle at that) so expensive? This makes it out of my reach and makes it difficult to build a truly good database that will net me good marks, and demotivates me. :(
Lastly, please feel free to recommend to me any utilities for SQL Server - such as performance monitors, backup utilities. Anything. But if they are priced utilities, they have to be sensibly priced (<£100), because I cannot yet afford to pay >£1k on such utilities.
Hi everybody. Have you ever noticed that you can create a database with a strange and unusual name with Enterprise Manager, but not with Query Analyzer and T-SQL code?!
For example, try to create a database with the name &%Test$. It will be created, as I said earlier, through the Database Wizard in Enterprise Manager, but if you execute:
Create Database &%Test$
you will receive the following error:
Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '&'.
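Enterprise Manager quotes the name behind the scenes; in Query Analyzer the same name works once it is written as a delimited identifier:

Create Database [&%Test$]
-- or, with SET QUOTED_IDENTIFIER ON:
Create Database "&%Test$"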
I am wondering if it is possible to use SSIS to sample a data set into a training set and a test set and feed them directly to my data mining models, without saving them somewhere, as they occupy too much space? Really need guidance on that.
I have a database on the server which is in production, and I would like to create another copy and use it for testing purposes, so the application can point to the test database for testing.
What is the best way to do this? I guess I have to give the test one a different name, right?
Can I do it without detaching the production one, or should I just copy the database - table structures - from the current one?
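No detach is needed; a backup/restore under a new name and new file locations leaves production online and untouched. The paths and logical file names below are illustrative - use the names reported by RESTORE FILELISTONLY:

BACKUP DATABASE ProdDB TO DISK = N'D:\Backup\ProdDB_ForTest.bak' WITH INIT;

-- shows the logical file names to plug into the MOVE clauses
RESTORE FILELISTONLY FROM DISK = N'D:\Backup\ProdDB_ForTest.bak';

RESTORE DATABASE ProdDB_Test
FROM DISK = N'D:\Backup\ProdDB_ForTest.bak'
WITH MOVE N'ProdDB_Data' TO N'D:\Data\ProdDB_Test.mdf',
     MOVE N'ProdDB_Log'  TO N'D:\Data\ProdDB_Test.ldf';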
We have a production database that was generated by a vendor. The vendor wants us to test a new version of their software. This testing process will take several months. The users want the testing to be as real time as possible. I have developed a series of scripts that will back up our databases and ship them over to our test environment on a nightly basis. We also of course have nightly backups. As a general rule, we do full backups once a week and differentials on a nightly basis.
We are a phone company that has transactions being applied to the database 7 X 24.
My question is this: Is there a way (an option or something) that when my backup of the production database which is destined for the test environment runs, I can tell it to not set the flags that indicate a backup has been done. What I want to avoid is the differential backup process from being 'Confused' about what backup it is doing a differential for.
I would appreciate any help or insight you can give me.
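If the servers are on SQL Server 2005 or later, the COPY_ONLY option is meant for exactly this case: the backup can be restored normally on the test side but does not reset the differential base, so the regular nightly differentials are unaffected. The path below is a placeholder:

BACKUP DATABASE ProdDB
TO DISK = N'\\TestServer\Backups\ProdDB_ForTest.bak'
WITH COPY_ONLY, INIT;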
We have both a production SQL 7 server, QA, and Development. From time to time, I want to move just the data from the production server to the other 2 servers without modifying the objects that may have been changed, such as stored procedures and rights. Is there a way, using the SQL tools provided, that we can move just the data? Because what also happens is that the rights to the objects change, which means my developers no longer have access to the tables for selects in QA, since the changes were overwritten by production, where they do not have the rights.
I am using a SQL Server 2008 database. I am working with a Windows user. I noticed that when I create a user in a database for a login, even when that login does not exist on the server, the user is still successfully created. I suppose it should error out.
Or am I missing something?
I am using this script:
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = N'mydomain\myuser')
CREATE USER [mydomain\myuser] FOR LOGIN [mydomain\myuser] WITH DEFAULT_SCHEMA=[dbo]
GO
Can we create a partition on an existing table? e.g. Create table t ( col1 number(10,0), Col2 Varchar(10)); After the table creation, can we alter the table to partition it?
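Yes, but not with a single ALTER TABLE; the usual route is to create a partition function and scheme and then build (or rebuild) the table's clustered index on that scheme. A sketch, assuming an int partitioning column and a single filegroup:

CREATE PARTITION FUNCTION pf_col1 (int)
AS RANGE LEFT FOR VALUES (1000, 2000, 3000);

CREATE PARTITION SCHEME ps_col1
AS PARTITION pf_col1 ALL TO ([PRIMARY]);

-- placing the existing table on the scheme; if it already has a clustered index,
-- rebuild that index WITH (DROP_EXISTING = ON) onto the scheme instead
CREATE CLUSTERED INDEX ix_t_col1 ON t (col1) ON ps_col1 (col1);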
We have two databases with the same schema and tables (same table names; basically a main DB and a copy of the main DB). Following is an example of table names from the 2 DBs.
CREATE TABLE #SourceDatabase (SourceColumn1 VARCHAR(50))
INSERT INTO #SourceDatabase VALUES('TABLE1'), ('TABLE2'), ('TABLE3'), ('TABLE4'), ('TABLE5'), ('TABLE6')
SELECT * FROM #SourceDatabase
DROP TABLE #SourceDatabase
CREATE TABLE #ArchiveDatabase (SourceColumn2 VARCHAR(50))
INSERT INTO #ArchiveDatabase VALUES('TABLE1'), ('TABLE2'), ('TABLE3'), ('TABLE4'), ('TABLE5'), ('TABLE6')
SELECT * FROM #ArchiveDatabase
DROP TABLE #ArchiveDatabase
We need a T-SQL statement that can create one view for each table from both databases (assuming both databases have the same number of tables and the same table names), so that we can run the T-SQL on a third database and the third DB ends up with all the views (one view for each table from the 2 DBs). The name of each view should be the same as the table name, and all 3 DBs are on the same server.
The 2 temp tables are just examples; the DBs have around 1700 tables each, so we need something like the following for each table.
CREATE VIEW DBO.TABLE1 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE1] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE1]
CREATE VIEW DBO.TABLE2 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE2] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE2]
CREATE VIEW DBO.TABLE3 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE3] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE3]
CREATE VIEW DBO.TABLE4 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE4] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE4]
CREATE VIEW DBO.TABLE5 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE5] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE5]
CREATE VIEW DBO.TABLE6 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE6] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE6]
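A sketch that generates those ~1700 statements from the source database's catalog instead of typing them; run it in the third database, review the PRINT output, then swap PRINT for sp_executesql once it looks right. It assumes every table lives in the dbo schema in both databases:

declare @sql nvarchar(max);

declare view_cur cursor local fast_forward for
    select 'CREATE VIEW dbo.' + quotename(t.name) + ' AS '
         + 'SELECT * FROM [SourceDatabase].dbo.' + quotename(t.name)
         + ' UNION ALL '
         + 'SELECT * FROM [ArchiveDatabase].dbo.' + quotename(t.name)
    from [SourceDatabase].sys.tables t;

open view_cur;
fetch next from view_cur into @sql;
while @@fetch_status = 0
begin
    print @sql;               -- replace with: exec sp_executesql @sql;
    fetch next from view_cur into @sql;
end
close view_cur;
deallocate view_cur;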
I've built my SQL Server Express database with SQL Server Management Studio Express, and now I want to enter some seed data to assist in building the app around it. I cannot find an option to manage the data in SSMSE, like I used to with Enterprise Manager. I don't want to have to write an app just to get test data in. Seems like this should be a common need. Am I missing something obvious here? Can't find any reference to this in a search of the forums. Please help.
I am running a script at the end of the day. What I need is for the rows in my temp table to get saved in a permanent table.
The name of the table should have the current date at the end.
Declare @tab varchar(100)
set @tab = 'MPOG_Research..ACRC_427_' + CONVERT(CHAR(10), GETDATE(), 112)
IF object_id(@tab) IS NOT NULL DROP TABLE '@tab';
Select * INTO @tab from #acrc427;
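Neither DROP TABLE '@tab' nor SELECT ... INTO @tab will accept a variable as the table name, so the statement has to be built as a string and run as dynamic SQL. A sketch along the same lines, assuming the dbo schema in MPOG_Research:

declare @tab sysname, @sql nvarchar(max);
set @tab = 'ACRC_427_' + convert(char(8), getdate(), 112);

set @sql = N'if object_id(''MPOG_Research.dbo.' + @tab + ''') is not null '
         + N'drop table MPOG_Research.dbo.' + quotename(@tab) + N'; '
         + N'select * into MPOG_Research.dbo.' + quotename(@tab) + N' from #acrc427;';

exec sp_executesql @sql;

The temp table #acrc427 stays visible to the dynamic batch because it runs on the same session.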
I am splitting data from a SQL table and sending it to an Excel file, but every time I rerun the package it appends to the existing data in the Excel file. I tried using an Execute SQL Task with an Excel connection, writing drop table `tablename` and then one more Execute SQL Task with create table `tablename` (`Id` int, `fname` varchar(100)), but it does not seem to work.
I have a closed polygon that coincidentally is in the shape of Iowa :) I have a point that is within the state and a point WELL outside it, but I get weird results that I don't expect when I try to get it to tell me that the point is within the polygon. Here is some basic code, with long coordinate data.
(1 row(s) affected) As I read it, there is a distance of about 7864 meters, which is close to what I would expect, so that's OK. For the point outside I would expect a distance as well, so that is confusing. Then we have the intersects: it says that the point inside does NOT intersect but the one outside DOES, and this is backed up by the intersection values.
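Without seeing the coordinate data, the usual cause of this inside-out behaviour with the geography type is ring orientation: the interior has to lie to the left of the ring's direction, so a ring listed the other way round describes everything on the globe except Iowa, which makes the outside point intersect and the inside one not. A quick diagnostic sketch - the polygon and points here are stand-ins, not the real data:

DECLARE @state geography, @inside geography, @outside geography;

SET @state   = geography::STGeomFromText(
    'POLYGON((-96.6 40.6, -90.1 40.6, -90.1 43.5, -96.6 43.5, -96.6 40.6))', 4326);
SET @inside  = geography::Point(42.0, -93.5, 4326);    -- roughly central Iowa
SET @outside = geography::Point(34.0, -118.2, 4326);   -- Los Angeles

-- an EnvelopeAngle approaching 180 is the classic sign of an inverted ring
SELECT @state.EnvelopeAngle()        AS EnvelopeAngle,
       @inside.STIntersects(@state)  AS InsideIntersects,   -- expect 1
       @outside.STIntersects(@state) AS OutsideIntersects,  -- expect 0
       @outside.STDistance(@state)   AS MetersOutside;

On SQL Server 2008 the fix is to re-list the ring's points in the opposite order; ReorientObject() only arrives in SQL Server 2012.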
I'm using my script in many locations to create a folder to save output files in, and if the folder is removed or not present the script can create it without any noise. The problem is that when I use the same sort of script to check whether a folder is present in the shared path, it will not create it there. Copying all bkp files from the local to the remote path works fine, but if you delete the folder, or rename the existing folder, and the script below tries to create the folder, it gets created as a "FILE" instead - very interesting. Per the IT team, the SQL Server account has been given full rights to create/delete/alter folders and files.
Do I need to use a separate script or another way to create/alter folders in the shared path?
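One thing worth trying before writing a separate script: the extended procedure that maintenance plans use can create nested folders on a UNC path too, as long as the SQL Server service account has rights on the share (the path below is only a placeholder):

EXEC master.dbo.xp_create_subdir N'\\FileServer\SQLBackups\MyInstance\Output';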
I am asked to create 100 procedures in a database. Is there a good way to create them in the database one by one by calling the files and saving the execution output files in a folder?
We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.
I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?
select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;
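One caveat with that approach: sys.dm_exec_query_stats only reflects plans currently in cache and is cleared on restart, so for a daily benchmark it helps to snapshot the totals into a table on a schedule rather than comparing live values. A minimal sketch, with an assumed baseline table:

if object_id('dbo.DailyReadBaseline') is null
    create table dbo.DailyReadBaseline
    (
        CaptureDate        date   not null,
        TotalPhysicalReads bigint not null,
        TotalLogicalReads  bigint not null
    );

insert into dbo.DailyReadBaseline (CaptureDate, TotalPhysicalReads, TotalLogicalReads)
select cast(getdate() as date),
       sum(total_physical_reads),
       sum(total_logical_reads)
from sys.dm_exec_query_stats;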
I have records in Excel format (Excel 2010) and I would like to bulk import them into SQL Server 2008, and while importing, have SQL Server automatically create a new table based on the header row of the source file.
I am not sure if SQL Server 2008 has this capability.
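A plain BULK INSERT will not create the table, but a SELECT ... INTO over OPENROWSET can build it from the sheet's header row. This assumes the ACE OLE DB provider is installed on the server and 'Ad Hoc Distributed Queries' is enabled; the file path, sheet name and target table are placeholders:

EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

SELECT *
INTO dbo.ImportedFromExcel
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Import\MyWorkbook.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');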
I need to create a function that replaces the data in a column with an 'X' based on the LEN of the data in the column. I created one that does a replacement, but it fills the column based on the max data length, and not the current length of the string or integer. An example of what I'm trying to accomplish:
Original data in a varchar(30) column:
thisisavalue
thisisanothervalue
thisisanothervalueagain
shortval
Replaced with:
xxxxxxxxxxxx
xxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
My current function is replacing the data like this:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
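The usual trick is to drive REPLICATE off the length of each value rather than the column definition. A sketch, with hypothetical table and column names:

-- inline, length-aware masking
SELECT REPLICATE('X', LEN(SomeColumn)) AS MaskedValue
FROM dbo.SomeTable;
GO
-- or wrapped as a reusable function
CREATE FUNCTION dbo.fn_MaskValue (@value varchar(30))
RETURNS varchar(30)
AS
BEGIN
    RETURN REPLICATE('X', LEN(@value));
END;
GO

Note that LEN ignores trailing spaces; use DATALENGTH instead if those should be masked as well.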
Is there any tool available to migrate data from a SQL Server test database to a SQL Server production database? The data migration should be based on a condition which can be given as an input for a table by the user. The dependent tables should also be migrated based on the given condition, i.e. data subsetting based on matching conditions.
Ex : Salary > 2000
Only the rows of the table which match the condition need to be migrated for the corresponding table. Its dependent tables' rows should also be migrated based on the given condition. Please help me with a tool which can automate this.
I've been struggling with this for some time. We have to group data based on a patient's admission date and discharge date. If any patient's discharge date + 1 = the next admission date, then we have to group both rows into one row and sum the costs from both rows. Please check out the sample input and expected output for details.
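Without the sample data to hand, here is a hedged sketch of the usual 'islands' approach on SQL Server 2008 (no LAG available): flag each row that does not continue the previous stay, turn a running total of those flags into a group number, then aggregate. The table and column names are assumptions:

;with Ordered as (
    select PatientID, AdmissionDate, DischargeDate, Cost,
           row_number() over (partition by PatientID order by AdmissionDate) as rn
    from dbo.PatientStays
),
Flagged as (
    select o.*,
           case when dateadd(day, 1, p.DischargeDate) = o.AdmissionDate
                then 0 else 1 end as IsNewGroup
    from Ordered o
    left join Ordered p
      on p.PatientID = o.PatientID and p.rn = o.rn - 1
),
Grouped as (
    select f.*,
           (select sum(f2.IsNewGroup)
            from Flagged f2
            where f2.PatientID = f.PatientID and f2.rn <= f.rn) as GrpNo
    from Flagged f
)
select PatientID,
       min(AdmissionDate) as AdmissionDate,
       max(DischargeDate) as DischargeDate,
       sum(Cost)          as TotalCost
from Grouped
group by PatientID, GrpNo
order by PatientID, min(AdmissionDate);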
We are setting up a test lab environment with 100 machines. We want one master testing db that gets replicated to each to run scripted application tests nightly.
My goal is to minimize the amount of work to move this thing to each of the 100 test machines. I am wondering whether we even need to have SQL local, and should instead invest in a monster DB server with 100 copies of the db that we restore, with each test machine pointing to its own db on that server, or if I should use db mirroring or something to get the master test db to each of those machines instead.