I will be receiving data from a company called Loan Performance that has one file/table that will hold 1 billion rows. They send data by period, and I plan to load the data via BCP from NT/DOS scripts. The 1 billion rows represent data for 200+ periods.

Are the following design plans feasible?

1. Partition the table by period value. I'm not sure of the maximum number of partitions per table in 2005, but I think we have period data back to 1992 and a new period gets created every month, so the possibility of having > 1000 partitions exists. I plan on pre-creating partitions for future data instead of dynamically creating them when a new period is sent.

2. Load data via BCP in DOS shell scripts that will drop the indexes (by partition), BCP in the data, and then re-create the indexes by partition. Is this possible, and will I see a performance increase as opposed to one huge table? (I'm pretty sure I will.) There is usually one period's data present per day, but sometimes the vendor resends all data (which would get loaded on a weekend).

I'm a bit unsure of where to start, as I have never worked with this amount of data. I worked with partitioning in Oracle a long time ago.

I plan on having a 2x quad-core 2.66 GHz CPU with 32 GB of RAM and SQL 2005 EE 64-bit connected to 1 terabyte of SAN disk.

Thanks all, PMA
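For what it's worth, a minimal sketch of a monthly partition layout; the table, column, and filegroup names are placeholders. Note that SQL Server 2005 caps a table at 1,000 partitions, so periods back to 1992 plus pre-created future months fit, but without much headroom:

-- Partition function: one boundary value per monthly period (extend as needed)
CREATE PARTITION FUNCTION pfPeriod (int)
AS RANGE RIGHT FOR VALUES (199201, 199202, 199203); -- ...one value per period

-- Map every partition to a filegroup (ALL TO [PRIMARY] keeps the sketch simple)
CREATE PARTITION SCHEME psPeriod
AS PARTITION pfPeriod ALL TO ([PRIMARY]);

-- Hypothetical target table, partitioned on the period column
CREATE TABLE dbo.LoanPerformance
(
    PeriodID int    NOT NULL,  -- e.g. 199201 for Jan 1992
    LoanID   bigint NOT NULL   -- ...plus the remaining data columns
) ON psPeriod (PeriodID);

With this layout, dropping and rebuilding indexes around each load touches only the affected partition's rows, which is where the win over one huge unpartitioned table comes from.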
Hi there; I have a photo viewer which shows the image based on a datetime QueryString. I have a couple of buttons like next, prev, last, and so on. For "prev", for instance, I want to select the image with a datetime that is lower than the current image's datetime, and is the biggest such datetime, of course. But I'm stuck on it! By the way, I use .NET 2.0 & SQL 05 Express. Thanks in advance.
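A minimal sketch of the "prev" lookup, assuming a hypothetical Photos table with PhotoID and TakenAt (datetime) columns:

-- Previous image: the newest photo strictly older than the current one
SELECT TOP 1 PhotoID, TakenAt
FROM dbo.Photos
WHERE TakenAt < @CurrentTakenAt
ORDER BY TakenAt DESC;

"Next" is the mirror image: WHERE TakenAt > @CurrentTakenAt with ORDER BY TakenAt ASC.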
Hi guys, could anyone help me write a SQL query to select only three months of records from the database? If the current month is March, then I need Jan, Feb, and Mar data; if it is Jan, then I need Nov, Dec, and Jan data. How can I write a SQL query for this? I need it for creating reports. Also, how can I write queries based on days? I mean to say that I need to select all records of a particular month that fall on a Monday or Tuesday. Help me out. Thanks in advance.
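A sketch of both filters, assuming a hypothetical Orders table with an OrderDate column:

-- Rolling three-calendar-month window ending with the current month
SELECT *
FROM dbo.Orders
WHERE OrderDate >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 2, 0)
  AND OrderDate <  DATEADD(month, DATEDIFF(month, 0, GETDATE()) + 1, 0);

-- All records of a particular month (March 2008 here) that fall on a Monday or Tuesday
SELECT *
FROM dbo.Orders
WHERE OrderDate >= '20080301' AND OrderDate < '20080401'
  AND DATENAME(weekday, OrderDate) IN ('Monday', 'Tuesday');

DATENAME sidesteps the server's DATEFIRST setting, though it does depend on the session language.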
HELP. I've inherited some less-than-stellar design.
I have a table: tbMD
ObjectID uniqueidentifier NOT NULL
Tag nvarchar(50) NOT NULL
Data nvarchar(400) NOT NULL
This table has 1.6 billion records and has filled up the drive (nearly 1 TB in the DB, with about 600 GB in this one table).
I need to move this table to another drive with ZERO down time.
I have created a new filegroup on a second drive array and created a new table (tbMD_TEMP) and my thought was to copy all the data from tbMD into tbMD_Temp then drop tbMD, and rename tbMD_Temp.
So, anyone have any great ideas on how to get this done very quickly?
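One alternative to the copy-drop-rename plan, assuming tbMD has (or can be given) a clustered index: rebuilding that index onto the new filegroup moves the data in a single operation, and ONLINE = ON (Enterprise Edition) keeps the table readable and writable while it runs. A sketch with hypothetical index and filegroup names:

-- Rebuild the clustered index on the new drive array; the data follows the index
CREATE CLUSTERED INDEX IX_tbMD_ObjectID
ON dbo.tbMD (ObjectID)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG_NewArray];

If tbMD is currently a heap, the same CREATE without DROP_EXISTING builds the index and moves the rows. Either way you need roughly the table's size free on the target array during the rebuild, plus substantial log and tempdb space, so check that first.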
After moving our database between servers, one table had a hiccup in its identity column. The number jumped from 761 to 1886863475 on the next insert. This is a production server and I didn't catch it until yesterday, so I have several days' worth of numbers in a central table. I can't clean them out, but I wondered if I could do the following:
DBCC CHECKIDENT ('table_name', RESEED, 800)
I tested it on our development server and it doesn't seem to cause any problems. I know that when we get to 1.8 billion on the identity column again we'll get an error, but I'm willing to risk it.
A few questions:
1) Am I insane for trying this?
2) Has anyone else run into a similar problem, and what did you do to fix it?
3) If I do this, is there anything in particular to watch out for?
4) What caused the jump in the first place?
My other option is to change the datatype from int to decimal(19) or something along those lines. This will upset our programmers and I'll have to change all my foreign keys (not a big deal, just a pain).
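A sketch of the checks around the reseed; 'table_name' and id_column are placeholders:

-- Report the current identity value without changing anything
DBCC CHECKIDENT ('table_name', NORESEED);

-- Confirm what actually exists before picking the new seed
SELECT MAX(id_column) FROM table_name;

-- Pull the seed back down once the high rows are accounted for
DBCC CHECKIDENT ('table_name', RESEED, 800);

The caveat: if any rows in the 1.8-billion range remain, inserts will eventually collide with them, which means key violations if the column is the primary key.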
Hi, I want to display the image records for each employee, with 10 images each. The number of records is more than one billion. Please provide an optimized query that I can use in a stored procedure and display in a GridView.
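The standard "top N per group" pattern is ROW_NUMBER; a sketch assuming a hypothetical EmployeeImages table:

-- Latest 10 images per employee
WITH Ranked AS
(
    SELECT EmployeeID, ImageID, ImageData,
           ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY UploadedAt DESC) AS rn
    FROM dbo.EmployeeImages
)
SELECT EmployeeID, ImageID, ImageData
FROM Ranked
WHERE rn <= 10;

With a billion-plus rows, filter to one page of employees inside the CTE (WHERE EmployeeID IN (...)) rather than ranking the whole table, and make sure an index on (EmployeeID, UploadedAt) exists to keep the ROW_NUMBER cheap.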
So I just got an email from Production Support saying an hour and a half of downtime is unacceptable for moving half a billion rows between 2 partitions, because I am moving a clustered index and space is a consideration.
I can not use partition switching because the clustered index is changing.
This is what I am doing...
1. I create a new table with the new clustered index on a new partition.
2. I move the records in 5K set-based batches by doing a range search on the existing clustered index on the existing table.
3. I then reapply all of the nonclustered indexes from the original table to the new one.
4. I do an sp_rename swap-out.
This is the same way I have done it many times before. Is there some new secret special sauce (other than partition switching) I can use?
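For reference, a sketch of the batch loop in step 2; all names are placeholders and it assumes the existing clustered key (KeyCol) is unique:

DECLARE @Batch TABLE (KeyCol bigint);
DECLARE @LastKey bigint, @Rows int;
SET @LastKey = 0;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    DELETE FROM @Batch;

    -- copy the next 5K rows in clustered-key order
    INSERT INTO dbo.NewTable (KeyCol, Col1, Col2)
    OUTPUT inserted.KeyCol INTO @Batch
    SELECT TOP (5000) KeyCol, Col1, Col2
    FROM dbo.OldTable
    WHERE KeyCol > @LastKey
    ORDER BY KeyCol;

    SET @Rows = @@ROWCOUNT;

    IF @Rows > 0
        SELECT @LastKey = MAX(KeyCol) FROM @Batch;
END;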
Hi everyone. I hope that my question isn't too broad, but here it goes...
I am trying to figure out the best way to scale a SQL Server database so that it can handle a billion simultaneous users querying the same tables, and can easily scale to handle many billions of simultaneous users. The database must also have 99.999% availability. The number of licences and the amount of hardware needed is not an issue.
I am using SQL Server 2005 and trying to create a linked server to Oracle 10. I used the commands below:

EXEC sp_addlinkedserver
    @server = 'test1',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',
    @datasrc = 'testsource';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'test1',
    @useself = 'false',
    @rmtuser = 'sp',
    @rmtpassword = 'sp';
When I execute SELECT * FROM test1...COUNTRY I get the error: "The OLE DB provider "MSDAORA" for linked server "...." does not contain the table "COUNTRY". The table either does not exist or the current user does not have permissions on that table." The 'sp' user I am connecting as is the owner of the table. What could be the problem? Thanks a lot.
I'm inserting from TempAccrual into VacationAccrual. It works nicely; however, if I run this script again it will insert the same values again into VacationAccrual. How do I block that? If there is a small change in one of the columns in TempAccrual, then allow the insert. Here is my query:
INSERT INTO vacationaccrual
    (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date
FROM tempaccrual; -- source column list assumed to mirror the target
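If this is SQL Server 2008 or later, MERGE covers both cases (insert new empnos, update changed ones) in one statement; a sketch keyed on empno, with the change test limited to two columns as an example:

MERGE dbo.vacationaccrual AS t
USING dbo.tempaccrual AS s
    ON t.empno = s.empno
WHEN NOT MATCHED BY TARGET THEN
    INSERT (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
    VALUES (s.empno, s.accrued_vacation, s.accrued_sick_effective_date, s.accrued_sick, s.import_date)
WHEN MATCHED AND (t.accrued_vacation <> s.accrued_vacation
               OR t.accrued_sick     <> s.accrued_sick) THEN
    UPDATE SET accrued_vacation            = s.accrued_vacation,
               accrued_sick                = s.accrued_sick,
               accrued_sick_effective_date = s.accrued_sick_effective_date,
               import_date                 = s.import_date;

On 2005, the same effect takes an UPDATE joined to tempaccrual plus an INSERT ... SELECT ... WHERE NOT EXISTS pair.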
Hi, I have a table with all employee bio-data in a completed project. I am now working on another project with a table that needs the same data, and here I am talking about 300 records that rarely change. Instead of re-entering this data in the new table, I want to import the data from the completed project into a table in this new project. Does anyone have an idea how to achieve that, or is there a better option? One more thing I am realising is that I am going to use this same data in very many applications, and someone from one department would have to enter it in all of them. So I was wondering if there could be a way of having a central database that this guy can maintain, and then I would be able to use that table's data in the different applications I am going to develop. I don't want to kill this unlucky guy with data entry tasks every time I deploy a new application. So basically, how can I make one database serve several different applications with its data?
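If the applications end up on the same SQL Server instance (an assumption), each application database can read one central copy through a view instead of holding its own table; a sketch with hypothetical names:

-- In each application database: expose the central table without duplicating it
CREATE VIEW dbo.Employees
AS
SELECT EmployeeID, FirstName, LastName, Department
FROM CentralHR.dbo.Employees;

Across servers, the same idea works through a linked server (four-part names), and replication is the usual answer when each application needs its own local copy.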
I need to use a bulk insert statement for copying a table with 200 million rows to another table on the same server. The table has no primary key or identity column.... script for BULK INSERT ...
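One note: BULK INSERT reads from a data file, so for a same-server table-to-table copy the usual route is INSERT ... SELECT or SELECT ... INTO. A sketch with placeholder names:

-- SQL 2008+: minimally logged into a heap with TABLOCK (simple/bulk-logged recovery)
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM dbo.SourceTable;

-- Any version: minimally logged when the target table does not exist yet
SELECT *
INTO dbo.NewTable
FROM dbo.SourceTable;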
I'm trying to run a query on a SQL Server 2005 table which has a WHERE clause that requires a query from my SQL Compact table.
SELECT * from RemoteDB.TESTDB.dbo.Objects
WHERE Last_Updated > '2008-05-21 10:51:00'
AND Object_PARENT IN (select Object_CODE from LocalDB.PDADB.dbo.Objects)
Basically, on a linked system, this query would find all new objects in my main database where the same objects exist in my local database. This would work just perfectly, no problems.
Now, the local database is actually on a PDA running SQL Server Compact Edition. There is currently no support for creating a linked environment. I have the option of pulling the table off the local DB, pushing it to the remote DB, running the above query within that single DB, retrieving the list of new entries, and pulling them down to the local DB, but that is a HUGE amount of bandwidth, even if I just used the single primary key column.
Would anyone maybe have a little advice for me on how I could possibly achieve the above result on SQL Server Compact please?
I am inserting something into a temp table without creating it beforehand, but this does not give any compilation error. Only when I execute the stored procedure do I get the error message that there is an invalid temp table. Shouldn't this result in a compilation error rather than an execution-time error?
--create the procedure and insert into the temp table without creating it.
--no compilation error.
CREATE PROC testTemp
AS
BEGIN
    INSERT INTO #tmp(dt)
    SELECT GETDATE()
END
Only on calling the proc does this give an execution error.
I created an inventory table with a few columns, say ServerName, version, patching details, etc.
I want to track changes to the table.
Let's say people are asked to modify the base table; I want a complete capture of the details modified and of the user (SYSTEM_USER) whose session is actually modifying the details.
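A minimal sketch of one way to do this with a trigger and an audit table; all names are placeholders:

-- Audit table capturing who changed what, and when
CREATE TABLE dbo.ServerInventory_Audit
(
    AuditID    int IDENTITY(1,1) PRIMARY KEY,
    ServerName varchar(100),
    ChangeType varchar(10),           -- INSERT / UPDATE / DELETE
    ChangedBy  sysname  DEFAULT SYSTEM_USER,
    ChangedAt  datetime DEFAULT GETDATE()
);

CREATE TRIGGER trg_ServerInventory_Audit
ON dbo.ServerInventory
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- rows in both inserted and deleted = UPDATE; only inserted = INSERT; only deleted = DELETE
    INSERT INTO dbo.ServerInventory_Audit (ServerName, ChangeType)
    SELECT COALESCE(i.ServerName, d.ServerName),
           CASE WHEN i.ServerName IS NOT NULL AND d.ServerName IS NOT NULL THEN 'UPDATE'
                WHEN i.ServerName IS NOT NULL THEN 'INSERT'
                ELSE 'DELETE' END
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.ServerName = d.ServerName;
END;

Capturing old and new values column by column is the same pattern with more columns in the audit table; on SQL Server 2008+, Change Data Capture or SQL Server Audit are the built-in alternatives.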
Hi... I was hoping someone could share some thoughts on the issue that I am having at the moment.
Problem: When I run the package on my local machine and update a local SQL Server DB/table, new records are written OK to the table. BUT when I change my destination, meaning I write records to another physical SQL Server DB/table, no INSERT occurs. AND when I move/copy that same package to another server (e.g. the server that did not get the records earlier) and run it locally, IT WORKS fine too.
What I am trying to do is very simple: add new records to a SQL Server table using SSIS. I only care about new rows, not even changed rows. Here is my logic:
1. Create an OLE DB source to the remote server, using a SELECT statement.
2. I have a Lookup component that will look for NEW records; it directs all rows that don't find a match to the redirect-rows error output.
3. Since I don't care about any rows that are matched in my lookup, I do nothing with them (I trash those rows).
4. I send the error rows (the NEW rows) to an OLE DB destination.
RESULTS: when I run the package locally and the destination table is also local, it WORKS FINE; but when I run the package locally and the destination table is on another server (remote), no rows are written.
The package is run through BIDS manually, so there are no security restrictions attached to it.
I am not sure what I am missing. And I do not see error in my package either. It is not failing.
Hopefully I am posting this question in the correct forum. I am still learning about SQL 2005. Here is my issue. I have an Access DB that I archive weekly into a SQL Server table. I have used the DTS wizard to create an import job, and initially that worked fine. The field I have as the primary key in the Access DB cannot be the primary key in the SQL table, since I archive weekly and that primary key field will be imported several times over. I overcame this initially by not having a primary key in the SQL table; the table is strictly for reference. However, now I need to set up a unique field for each of the records in the SQL table. What I have done so far is create a recordID field in the SQL table that is an int and set to Identity (autonumber). That worked great and created unique IDs for all existing records. The problem now is on the import: when I try to import the Access table, I get an error because of the extra field in the SQL table, saying a null value cannot be imported into that field. So... my final question is, how can I import the Access table into the SQL table with one extra field, which is the autonumber unique field? Thanks a bunch for any assistance.
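The usual fix is to name the columns explicitly on the insert side so the identity column is never mentioned and SQL Server fills it in; a sketch with hypothetical column names:

-- List every column except recordID; the identity generates itself
INSERT INTO dbo.ArchiveTable (CustomerID, ArchiveDate, Amount)
SELECT CustomerID, ArchiveDate, Amount
FROM Access_Staging;

In the import wizard, the equivalent is editing the column mappings so that nothing maps to recordID.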
I have nine types of buttons, EnrollAmtBTM, PlacAmtBTM, and so on. I also have a SQL Server view, V_Payment_Amount_List, from which I need to display data on the buttons: when I choose an agency from the list, the amount corresponding to that Agency_ID is displayed. The Agency_ID is fetched from the SQL condition built below, which is where I fetch the agency data when one is selected:

protected void CollectAgencyInformation()
{
    WebLibraryClass ConnectionFinanceDB;
    ConnectionFinanceDB = new WebLibraryClass();
    string SQLCONDITION = "";
    string RUN_SQLCONDITION = "";
    SessionValues ValueSelected = null;
    int CollectionCount = 0;

    if (Session[Session_UserSPersonalData] == null)
    {
        ValueSelected = new SessionValues();
        Session.Add(Session_UserSPersonalData, ValueSelected);
    }
    else
    {
        ValueSelected = (SessionValues)(Session[Session_UserSPersonalData]);
    }

    ProcPaymBTM.Visible = false;
    PaymenLstBTN.Visible = false;
    Dataviewlisting.ActiveViewIndex = 0;

    TreeNode SelectedNode = new TreeNode();
    SelectedNode = AgencyTree.SelectedNode;
    SelectedAgency = SelectedNode.Value.ToString();
    Agencytxt.Text = SelectedAgency;
    Agencytxt2.Text = SelectedAgency;
    Agencytxt3.Text = SelectedAgency;

    DbDataReader CollectingDataSelected = null;
    try
    {
        CollectingDataSelected = ConnectionFinanceDB.CollectedFinaceData(
            "SELECT DISTINCT AGENCY_ID FROM dbo.AIMS_AGENCY where Program = '" + SelectedAgency + "'");
    }
    catch { }

    DataTable TableSet = new DataTable();
    TableSet.Load(CollectingDataSelected, LoadOption.OverwriteChanges);

    // Build an OR'd Project_ID condition from the agencies returned
    int IndexingValues = 0;
    foreach (DataRow DataCollectedRow in TableSet.Rows)
    {
        if (IndexingValues == 0)
        {
            SQLCONDITION = "where (Project_ID = '" + DataCollectedRow["AGENCY_ID"].ToString().Trim() + "'";
        }
        else
        {
            SQLCONDITION = SQLCONDITION + " OR Project_ID = '" + DataCollectedRow["AGENCY_ID"].ToString().Trim() + "'";
        }
        IndexingValues += 1;
    }
    SQLCONDITION = SQLCONDITION + ")";

    ConnectionFinanceDB.DisconnectToDatabase();

    if (Dataviewlisting.ActiveViewIndex == 0)
    {
        Dataviewlisting.ActiveViewIndex += 1;
    }
    else
    {
        Dataviewlisting.ActiveViewIndex = 0;
    }

    SelectedAgency = SQLCONDITION;
    ValueSelected.CONDITION = SelectedAgency;
}
This is where I used to get the counts that are displayed on the other buttons, but I changed the query to display only the Payment_Amount_Budgeted for the agency selected, from the view:

RUN_SQLCONDITION = "SELECT Payment_Amount_Budgeted FROM dbo.V_Payment_Amount_List " + SQLCONDITION;
try
{
    CollectionCount = ConnectionFinanceDB.CollectedFinaceDataCount(RUN_SQLCONDITION);
    EnrollAmtBTM.Text = CollectionCount.ToString();
}
catch { }

This is my CollectedFinaceDataCount function, which counts the records in the above SELECT statement if I use, for example, "SELECT Count(Placement_Retention_ID) FROM dbo.V_Retention_6_Month_Finance_Payment_List". Here is the function:

public int CollectedFinaceDataCount(String SQLStatement)
{
    int DataCollection;
    DataCollection = 0;
    try
    {
        SQLCommandExe = FinanceConnection.CreateCommand();
        SQLCommandExe.CommandType = CommandType.Text;
        SQLCommandExe.CommandText = SQLStatement;
        ConnectToDatabase();
        DataCollection = (int)SQLCommandExe.ExecuteScalar();
        DisconnectToDatabase();
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception Occurred :{0},{1}", ex.Message, ex.StackTrace.ToString());
    }
    return DataCollection;
}
So my requirement is to display only the value from the view I have, against the agency selected. Please help ASAP. Thanks, Santosh
I imported a table into a SQL database, but I cannot update this table, even with SQL Server Management Studio. When I change any data in the table mentioned above, a red exclamation sign appears to the left of the record. How can I correct this problem? Thanks.
Strange one here - I am posting this in both SQL Server and Access forums
Access is telling me it can't append any of the records due to a key violation.
The query:
INSERT INTO dbo_Colors ( NameColorID, Application, Red, Green, Blue )
SELECT Colors_Access.NameColorID, Colors_Access.Application, Colors_Access.Red,
       Colors_Access.Green, Colors_Access.Blue
FROM Colors_Access;
Colors_Access is linked from another MDB and dbo_Colors is linked from SQL Server 2000.
There are no indexes or foreign constraints on the SQL table. I have no relationships on the dbo_ table in my MDB. The query works if I append to another Access table. The datatypes all match between the two tables, though the dbo_ table has two additional fields not referenced in the query.
I can manually append the records using cut and paste with no problems.
What is the best way to transfer data from a staging table into the main table?
Example:
Staging table name: TableA_stage (# of rows: millions)
Main table name: TableA_main (# of rows: billions)
Note: the staging table may contain some of the same data as the main table.
Currently I am doing:
- Load data into the staging table (TableA_stage)
- Remove any duplicate rows from the staging table (TableA_stage)
- Disable all indexes on the main table (TableA_main)
- Insert into the main table (TableA_main) from the staging table (TableA_stage)
- Remove any duplicate rows from the main table using a CTE (TableA_main)
- Rebuild indexes on the main table (TableA_main)
The problem with the above method is that it takes a lot of time and the log file grows very big.
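One variation that keeps the log small: filter the duplicates out on the way in instead of deleting them from the billion-row main table afterwards, and commit in batches so no single transaction pins the log. A sketch with a placeholder key column; it assumes the staging table has already been de-duplicated and that TableA_main keeps an index on KeyCol (so don't disable that one):

DECLARE @Rows int;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    -- insert only staged rows not already present, 500K at a time
    INSERT INTO dbo.TableA_main (KeyCol, Col1, Col2)
    SELECT TOP (500000) s.KeyCol, s.Col1, s.Col2
    FROM dbo.TableA_stage AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TableA_main AS m WHERE m.KeyCol = s.KeyCol);

    SET @Rows = @@ROWCOUNT;

    -- clear copied rows from staging so the loop advances
    DELETE s
    FROM dbo.TableA_stage AS s
    WHERE EXISTS (SELECT 1 FROM dbo.TableA_main AS m WHERE m.KeyCol = s.KeyCol);
END;

Under the simple or bulk-logged recovery model the log can truncate between batches, which is what stops it growing.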
I have a table 'stores' that has 3 columns (storeid, article, doc), and a second table 'allstores' that has 3 columns (storeid (always 'ALL'), article, doc). The stores table's storeid column will hold a store's id, which will have multiple articles and docs. The 'allstores' table has 'ALL' as the store for every article and doc combination; this table is like the master lookup table for all possible article and doc combinations. The 'stores' table has the actual article and doc per storeid.
What I want to pull is every article/doc combination that exists in the 'allstores' table but does not exist in the 'stores' table, per storeid. So if an article/doc combination exists in the 'allstores' table, and storeid 50 does not use that combination but store 51 does, I want output showing storeid 50 and the combination that does not exist for that storeid. I will try this example:
'allstores'                'stores'
storeid  doc   article     storeid  doc   article
ALL      0010  001         101      0010  001
ALL      0010  002         101      0010  002
ALL      0011  001         102      0011  002
ALL      0011  002
So I want the query to pull the row from 'allstores' that does not exist in 'stores', which in this case would be the 3rd record, "ALL 0011 001".
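A sketch of one way to phrase it: cross join every storeid against the master combination list, then keep the pairs the 'stores' table doesn't have:

-- every store x every master combination, minus what each store actually has
SELECT s.storeid, a.doc, a.article
FROM (SELECT DISTINCT storeid FROM stores) AS s
CROSS JOIN allstores AS a
WHERE NOT EXISTS (SELECT 1
                  FROM stores AS st
                  WHERE st.storeid = s.storeid
                    AND st.doc     = a.doc
                    AND st.article = a.article);

For the sample data this includes (102, 0011, 001), the missing combination called out above, along with the other combinations each store lacks.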
I started with an inline table returning function with a hard coded input table name. This works fine, but my boss wants me to generalize the function, to give it in input table parameter. That's where I'm running into problems.
In one forum, someone suggested that an input parameter for a table is possible in 2012, and the example I saw used "sysname" as the parameter type. It didn't like that. I tried "table" for the parameter type. It didn't like that either.
The other suggestion was to use dynamic sql, which I assume means I can no longer use an inline function.
This means switching to the multi-line function, which I will if I have to, but those are more tedious.
Any syntax for using the inline function to accomplish this, or am I stuck with multi-line?
A simple example of what I'm trying to do is below:
CREATE FUNCTION [CSH388102].[fnTest]
(
    -- Add the parameters for the function here
    @Source_Tbl sysname
)
RETURNS TABLE
AS
RETURN
(
    SELECT @Source_Tbl.yr
    FROM @Source_Tbl
)
Error I get is:
Msg 1087, Level 16, State 1, Procedure fnTest, Line 12
Must declare the table variable "@Source_Tbl".
If I use "table" as the parameter type, it gives me:
Msg 156, Level 15, State 1, Procedure fnTest, Line 4
Incorrect syntax near the keyword 'table'.
Msg 137, Level 15, State 2, Procedure fnTest, Line 12
Must declare the scalar variable "@Source_Tbl".
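For what it's worth, a table name can't be a variable in any function type, because functions aren't allowed to run dynamic SQL; the usual workaround is a stored procedure wrapped around sp_executesql. A hedged sketch keeping the @Source_Tbl parameter idea (it assumes a one-part table name):

CREATE PROCEDURE dbo.usp_GetYears
    @Source_Tbl sysname
AS
BEGIN
    DECLARE @sql nvarchar(max);
    -- QUOTENAME guards the injected table name; schema-qualified names need extra handling
    SET @sql = N'SELECT yr FROM ' + QUOTENAME(@Source_Tbl) + N';';
    EXEC sp_executesql @sql;
END;

A table-valued parameter (SQL 2008+) passes a table of data, not a table name, so it doesn't help with parameterizing which table a query reads from; that may be what the forum suggestion was alluding to.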
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error comes up, the whole activity is rolled back. I don't want that to happen. I want to know the records where the problem persists, and the rest have to be inserted.
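One pattern that avoids the rollback entirely: validate first, insert the clean rows, and route the predicted failures to an error table. A sketch assuming the usual culprits (NULL or duplicate keys) and placeholder names:

-- rows that would fail go to an error table for review
INSERT INTO dbo.MainTable_errors (KeyCol, Col1, ErrorReason)
SELECT s.KeyCol, s.Col1,
       CASE WHEN s.KeyCol IS NULL THEN 'NULL key' ELSE 'Duplicate key' END
FROM dbo.StagingTable AS s
WHERE s.KeyCol IS NULL
   OR EXISTS (SELECT 1 FROM dbo.MainTable AS m WHERE m.KeyCol = s.KeyCol);

-- everything else goes in cleanly
INSERT INTO dbo.MainTable (KeyCol, Col1)
SELECT s.KeyCol, s.Col1
FROM dbo.StagingTable AS s
WHERE s.KeyCol IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM dbo.MainTable AS m WHERE m.KeyCol = s.KeyCol);

For errors you can't predict in a WHERE clause, the fallback is inserting in small batches inside TRY/CATCH and logging whichever batch fails.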
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... For each unique TemplateId, how can I insert one more row?
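A sketch of the usual shape, assuming a hypothetical Templates table and fixed values for the other columns of the new rows:

-- one extra row per distinct TemplateId
INSERT INTO dbo.Templates (TemplateId, SomeColumn)
SELECT DISTINCT TemplateId, 'extra row'
FROM dbo.Templates;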
I would like to find out how to capture the record count for a table in Oracle using SSIS and then write that value to a SQL Server table.
I understand that I can use a variable to accomplish this task. Well, the first issue I run into is which Imports statement I need to use in the design script section of the Script Task. I see that in many examples the following statement is used for SQL Server databases:
Imports System.Data.SqlClient
Which Imports statement do I need to use for an Oracle database? I am using an OLE DB connection.
Previously, the same records existed in the table having the primary key and the table having the foreign key. We have found that 7 records were lost from the primary key table, but the same records still exist in the foreign key table.
Hi, I would have used the aspnet membership tool to auto-create all the ASP.NET membership tables. However, the hosting company doesn't allow remote connections, which meant I had to create the tables by hand, scripting the CREATEs with Management Studio. However, I noticed that one of the tables should contain data even without any users: aspnet_SchemaVersions, and when it is empty it causes an error when trying to log onto my site. The fix is to make sure the table has the 4-5 rows of data in it (which are missing on the live server). It's just a few rows of data, but I want to script the inserts for each row so I don't have to type them in using myLittleAdmin (the host's web version of Management Studio). Can anyone point me in the right direction?
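One way to avoid typing them at all: generate the INSERT statements from the working development database and paste the output into myLittleAdmin. A sketch; the column names follow the standard aspnet_regsql schema, so double-check them against your scripted CREATE TABLE first:

-- run on the dev server: emits one INSERT per existing row
SELECT 'INSERT INTO dbo.aspnet_SchemaVersions (Feature, CompatibleSchemaVersion, IsCurrentVersion) VALUES (N'''
       + Feature + ''', N''' + CompatibleSchemaVersion + ''', '
       + CAST(IsCurrentVersion AS varchar(1)) + ');'
FROM dbo.aspnet_SchemaVersions;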