SQL Server 2008 :: Loading Data Into New Partition Table Online
Mar 1, 2015
When you load data into a new partitioned table, can it be done online, without any downtime? I ask because I have a few tables that are around 250 GB and more.
Can we create a partition on an existing table? E.g.
Create table t (col1 number(10,0), col2 varchar(10));
After the table creation, can we alter the table to partition it?
We are planning to create partitioning on existing tables. The partitioning is on a date column; there should be one partition for each year.
Creation of new partitions should be automated. We don't have any plans to archive old data; all we want is for new partition creation to be automated.
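New partition creation can be automated with a scheduled job. A minimal sketch, assuming a RANGE RIGHT partition function pfYearly and partition scheme psYearly on the date column (both names are hypothetical):

-- January 1 of next year, computed 2008-style (no DATEFROMPARTS).
DECLARE @NextYearStart datetime;
SET @NextYearStart = DATEADD(YEAR, DATEDIFF(YEAR, 0, GETDATE()) + 1, 0);

-- Tell the scheme which filegroup the new partition should use,
-- then add next year's boundary while that range is still empty.
ALTER PARTITION SCHEME psYearly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfYearly() SPLIT RANGE (@NextYearStart);

Run from a SQL Server Agent job shortly before year end, this keeps an empty partition waiting for the new year's rows, and splitting an empty range is a metadata-only operation.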
What is the syntax to verify that the partition data is loaded into the correct partition?
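One option is the $PARTITION function, which maps a value to its partition number. A sketch, with placeholder function, table, and column names:

-- Row count per partition; compare against the expected count per year.
SELECT $PARTITION.pfYearly(DateCol) AS PartitionNumber,
       COUNT(*) AS RowsInPartition
FROM dbo.MyPartitionedTable
GROUP BY $PARTITION.pfYearly(DateCol)
ORDER BY PartitionNumber;

Querying sys.partitions for the table's object_id gives similar per-partition counts without scanning the data.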
I'm looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel and, when the BULK INSERT for table dbo.X_n completes, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide under Partition SWITCH, I read the instructions as saying to copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the default filegroup of the target (dbo.X_n) from the default filegroup. Then it says I need to match indexes, and it lists the filegroup as something we need to match with the main table.
As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have separate files (in the same group) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is completed, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn't actually move, just the metadata for it.
When I read the instructions linked above, I hear “Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
Where am I disconnected?
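For reference, the switch itself is a single metadata-only statement. A sketch (the table names and partition number are illustrative):

-- Switch a fully loaded staging table into partition 3 of the main table;
-- dbo.X_1, dbo.bigdaddy, and the partition number are hypothetical.
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 3;

The requirement behind the apparent contradiction is precisely the filegroup one: the source table must sit on the same filegroup as the destination partition it switches into. So if the destination partition is not on the database default filegroup, the guide has you move the target table off the default filegroup so that the two match.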
A common partitioning scenario is when the partition column has the same value for every record in the partition, as opposed to a range of values. Am I the only person who wonders why there isn't an option to automatically partition a table based on the unique values of the partition column? Instead of defining a partition function with constants, you ought to be able to just give it the column and be done. This would be particularly valuable for tables partitioned on a weekly or monthly date; when new data is added it could simply create a new partition if one doesn't already exist.
Have you designed a solution for loading data into a SQL destination from a single 5-10 GB flat file? If so, what kind of performance measures did you take while designing the solution?
I have a simple table with 4 columns (idAuxiliarPF (BIGINT, PK), pf (BIGINT, FK), Data (DATETIME), Descr (NVARCHAR)) that has approx. 50k rows.
I need to create a partition of the data to join to another table. The query that I have:
SELECT
ROW_NUMBER() OVER (PARTITION BY pf ORDER BY Data DESC, idAuxiliarPF DESC) AS RN,
pf,
Data,
Descr
FROM dbo.PFAuxiliar
WHERE Data <= GETDATE()

This query takes around 40 seconds to return the results.
If I remove the Descr column, the query takes no time:
SELECT
ROW_NUMBER() OVER (PARTITION BY pf ORDER BY Data DESC, idAuxiliarPF DESC) AS RN,
pf,
Data
FROM dbo.PFAuxiliar
WHERE Data <= GETDATE()

I have two indexes: clustered (idAuxiliarPF) and nonclustered (pf).
How can I improve the performance of this query?
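A likely explanation is that the nonclustered index on pf covers the second query but not Descr, so the first query pays a key lookup per row (or falls back to a scan) to fetch the wide NVARCHAR column. A sketch of a covering index, assuming INCLUDE is available (SQL Server 2005+); the index name is made up:

-- Keys match the PARTITION BY / ORDER BY columns; Descr rides along
-- in the leaf level only, so the query can be answered from the index.
CREATE NONCLUSTERED INDEX IX_PFAuxiliar_pf_Data
ON dbo.PFAuxiliar (pf, Data DESC, idAuxiliarPF DESC)
INCLUDE (Descr);

Whether this pays off depends on how wide Descr is; if it is very large, the index approaches the size of the table itself.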
We have a massive database with an almost equally massive amount of traffic to and from it.
I've been requested to implement sliding window partitioning with two partitions, an active and a passive one. I managed to test this on a very small testbed last month.
I have currently moved 97k onto the partition function, leaving me another 26k to go.
I'm using the following stored procedure to implement the sliding window
CREATE PROCEDURE [dbo].[ManageFactSlidingWindow]
(
    @pFunction nvarchar(max),
    @pSchema nvarchar(max),
    @FG nvarchar(max),
    @moveDays int
)
/*****************************************************************************
PROCEDURE NAME: [ManageFactSlidingWindow]
AUTHOR: Arshad Ali
CREATED: 02/24/2013
DESCRIPTION: This stored procedure manages sliding window for the partitioned table
VERSION HISTORY:
DATE EMAIL Company DESCRIPTION
[Code] .....
When I try to move the partition window by even a single day, I get loads of locks.
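For context, the usual sliding window cycle is meant to be metadata-only end to end. A sketch of the per-cycle steps, with hypothetical object names (the real procedure body is elided above):

-- 1. Switch the oldest partition out to an empty archive table on the same filegroup.
ALTER TABLE dbo.FactTable SWITCH PARTITION 1 TO dbo.FactTable_Archive;
-- 2. Remove the now-empty boundary at the trailing edge.
ALTER PARTITION FUNCTION pfFact() MERGE RANGE ('20130101');
-- 3. Add a new boundary at the leading edge.
ALTER PARTITION SCHEME psFact NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfFact() SPLIT RANGE ('20130301');

Even so, SPLIT and MERGE take a schema modification lock on the table, so on a heavily trafficked fact table the window move will block and be blocked by concurrent queries, and splitting a partition that still contains rows physically moves data under that lock.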
I am looking for a way to convert the following format into a SQL table. The format is BibTeX.
Essentially, a new row in the table would be created for each entry, denoted by an @ symbol, and each column is denoted by an =. As you can see from the example data, no single entry contains all the possible columns, and some fields can run over two lines.
To load this I was considering loading the file into a table with each line as a row, adding a row number, then a column counting the @ signs in order, essentially grouping the lines of each record; then, for each group, running through and looking for the column keywords 'author', 'title', etc., and splitting the data out into its constituent parts using SUBSTRING and CHARINDEX (a sketch of the grouping step follows the sample below).
@Book{hicks2001,
author = "von Hicks, III, Michael",
title = "Design of a Carbon Fiber Composite Grid Structure for the GLAST
Spacecraft Using a Novel Manufacturing Technique",
publisher = "Stanford Press",
year = 2001,
[Code] ....
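A minimal sketch of the grouping step described above, assuming the file has been loaded one line per row into a staging table (all object names are invented; SQL Server 2008 lacks running-total window functions, hence the correlated subquery):

-- dbo.BibTexLines(LineNo int, LineText nvarchar(max)) is a hypothetical staging table.
SELECT l.LineNo,
       l.LineText,
       -- Running count of '@' lines up to this line = entry number.
       (SELECT COUNT(*)
        FROM dbo.BibTexLines p
        WHERE p.LineNo <= l.LineNo
          AND LEFT(LTRIM(p.LineText), 1) = '@') AS EntryNo
FROM dbo.BibTexLines l;

From there, each EntryNo group can be scanned for 'author =', 'title =', and so on with CHARINDEX and SUBSTRING, after concatenating continuation lines.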
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size with 13 million rows. During this load the LDF file grows to 350 GB, until the disk runs out of space. The server has about 110 GB of memory reserved for SQL Server; tempdb doesn't grow. The bucket count in the CREATE statement is 262144, and the hash key has 4 fields (2 of datatype int, 1 smallint, 1 varchar(200)). The disk holding the data files (including the Hekaton files) still has free space.
How can I reduce the growth of the LDF file during the data load?
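One common mitigation is to break the single 13-million-row INSERT ... SELECT into smaller committed batches, so the log can truncate between transactions (with SIMPLE recovery, or frequent log backups under FULL). A sketch only; the table and column names are hypothetical:

-- Load in 500k-row chunks keyed on an increasing Id column.
DECLARE @BatchSize int = 500000, @MaxId bigint, @FromId bigint = 0;
SELECT @MaxId = MAX(Id) FROM dbo.SourceTable;
WHILE @FromId < @MaxId
BEGIN
    INSERT INTO dbo.MemOptTable (Id, Col1, Col2, Col3)
    SELECT Id, Col1, Col2, Col3
    FROM dbo.SourceTable
    WHERE Id > @FromId AND Id <= @FromId + @BatchSize;
    SET @FromId = @FromId + @BatchSize;
END;

A single giant insert keeps every row's log records in one open transaction, which is exactly what prevents the log from being reused during the load.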
I have one Excel workbook that contains 50 sub-sheets with different names. Is it possible to load all the sheets into SQL using SSIS?
I am trying to look into a query that's taking 5 minutes to execute in production. It's a LINQ-generated, 6000-line SQL query that the dev team is trying to put into production without any testing. They want me to add indexes to the (brand new) database so that the query will run fast. The execution plan looks really weird. So, just as a starter, I tried to tune it using the DTA, and I am getting the below error. I am using a .sql query file as the input.
The minimum storage space required for the selected physical design structures exceeds the default storage space selected by Database Engine Tuning Advisor. Either keep fewer physical design structures, or increase the default storage space to be larger than at least 441 MB.Use one of the following methods to increase storage space: (1) If you are using the graphical user interface, enter the required value for Define max. space for recommendations (MB) in the Advanced Options of the Tuning Options tabbed page; (2) If you are using dta.exe, specify the maximum space value for the -B argument; (3) If you are using an XML input file, specify the maximum space value for the <StorageBoundInMB> element under <Tuning Options>
My question is: what maximum space value should I consider specifying, per recommendation (2) in the above error?
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error occurs, the whole operation is rolled back. I don't want that to happen: I want to know which records have problems, and the rest have to be inserted.
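One common pattern, sketched here with hypothetical table names, is to pre-validate: insert the rows that satisfy the destination's constraints in one set-based statement and divert the rest to an error table:

-- Rows that pass the main table's NOT NULL / FK checks.
INSERT INTO dbo.MainTable (Id, Amount, CustomerId)
SELECT s.Id, s.Amount, s.CustomerId
FROM dbo.Staging s
WHERE s.Amount IS NOT NULL
  AND EXISTS (SELECT 1 FROM dbo.Customer c WHERE c.Id = s.CustomerId);

-- Everything else goes to an error table for review.
INSERT INTO dbo.Staging_Errors (Id, Amount, CustomerId)
SELECT s.Id, s.Amount, s.CustomerId
FROM dbo.Staging s
WHERE s.Amount IS NULL
   OR NOT EXISTS (SELECT 1 FROM dbo.Customer c WHERE c.Id = s.CustomerId);

The WHERE clauses must mirror whatever actually makes rows fail; for conversion failures, TRY_CONVERT (new in SQL Server 2012) is a convenient predicate.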
We have a SQL cluster installed on top of a Windows cluster in a VM environment: Node1 and Node2 under Windows Failover Cluster. The SQL instance is currently on Node2; the instance is up and running, but the SQL cluster service remains in online pending and restarts the instance every 5 minutes.
The SQL Browser service is running successfully. TCP/IP ports are enabled and configured. If we start the SQL Server Agent, it stays up for a few seconds and then stops immediately. The cluster service attempts to connect to the SQL service every few minutes (a setting in the SQL cluster resource) for the IsAlive check; if this fails, the SQL resource is restarted even if the instance was online. I believe this is exactly what is happening.
[sqsrvres] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. (268435455)
00001024.00053314::2015/10/30-19:57:50.772 ERR [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] ODBC Error: [HYT00] [Microsoft][SQL Server Native Client 11.0]Login timeout expired (0)
00001024.00053314::2015/10/30-19:57:50.772 ERR [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] ODBC Error: [08001] [Microsoft][SQL Server
Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books
Online. (268435455)
00001024.00053314::2015/10/30-19:57:50.772 INFO [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] Could not connect to SQL Server (rc -1
I have an online SQL Server database provided by an ISP. I do not have permission to create a backup device, and I understand this is normal practice. I am not using Enterprise Manager to administer the online database. I know I can back up the structure of the database using SQL scripts.

My question is: how do I back up, on my own machine, the data contained in the online database tables I have created? If I were using Enterprise Manager I could do it by downloading tables using the DTS facility, but how can I do it without Enterprise Manager? Is there some workaround which I have missed, e.g. creating a CSV file of the data?

Best wishes for 2005 to all those helpful people in this newsgroup!

John Morgan
Hi
A column with the TEXT datatype is not stored in the same data row anyway. I am wondering if there is any performance gain from putting it in a separate table. Thanks
I am using a writeback (WB) partition to receive data from various inputs; it appends any new data that I add to the WB partition.
I am able to read the data immediately in the WB partition through a Fact partition query. This is working at this point as desired.
Eventually I want to move the data from the WB partition into the fact partition. How can I do this, both manually and through automation?
I have an SSIS package running well in production; however, sometimes the package fails when the Excel file contains 25000+ rows.
The SSIS package is run by SQL Server Agent and is set to run in 32-bit mode.
I checked the data by loading it in batches, and all of it loaded successfully. The funny thing is that when I run the same package on my local development PC using BIDS with the same data file, the package loads all 25000+ rows successfully.
Is there some setting that is preventing all rows from loading in the server environment?
I'm currently stuck with a table that has 350 million records. Querying this table is insanely slow, so I had a better look at the existing yearly partitioning. I already managed to partition at the month level, which sped up querying a lot. I did this on the staging table, where I used an ALTER statement to split the 2015 partition into 12 months (see the sketch after the next paragraph).
However, in our project we use Data Vault. This means that we have 4 tables (hub, sat hub, link, sat link), all carrying 350 million records. The problem is that altering the partition function does not work; the server cannot handle this action. What is the best way to do this without having to drop/reload all the tables?
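For reference, the month-level split on the staging table presumably looked something like this (all names hypothetical):

-- Add a boundary per month inside the old 2015 yearly partition.
ALTER PARTITION SCHEME psByDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfByDate() SPLIT RANGE ('20150201');
-- ...repeated for each remaining 2015 month boundary.

Splitting a partition that already holds rows physically moves every row on the far side of the new boundary and is fully logged, which is why it chokes at 350 million rows. The usual workaround is to switch each populated partition out to a staging table (metadata-only), run the splits while the partitions are empty, and then switch or reload the data back in.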
I have a report that summarizes hospital readmissions. Some months may have only a female or only a male patient readmitted, but I want to show counts for both genders every month either way.
create table dbo.Scott_TEST
(
YearMonth char(9),
Gender char(1),
NumOfFemale int,
NumOfMale int
[Code] ....
MyTable has the following data (just a sample):
parent | NAme | Checked | contactmethod|Check2 | Other
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face | true | skills
974224 | Other | true | Contact | true | skills
I'd like to pivot on "parent"
In a perfect world I'd like to see output like
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face, Collateral Contact | true | skills
If there is more than one Name or contactmethod for the same parent, they would be strung together with commas (a sketch of the comma concatenation follows).
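On SQL Server 2008 (no STRING_AGG), the usual comma-concatenation trick is STUFF plus FOR XML PATH. A sketch against the sample table above:

SELECT t.parent,
       t.NAme,
       STUFF((SELECT ', ' + t2.contactmethod
              FROM MyTable t2
              WHERE t2.parent = t.parent
              FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
             1, 2, '') AS ContactMethods
FROM MyTable t
GROUP BY t.parent, t.NAme;

The TYPE / .value pair keeps characters such as & from coming back XML-escaped.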
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example, the last row in the table is:
c010, A, 92, NULL
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
This value is 668, so the row should look like:
c010, A, 92, 668
create table dbo.TEST
(
MeasureID varchar(10),
GroupID char(1),
Numerator float,
Denominator float
)
[Code] .....
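A minimal sketch of the update described above, using the dbo.TEST table and the literal values from the question (it assumes the (c001, A) row is unique):

-- Copy the Numerator of the (c001, A) row into the Denominator of the (c010, A) row.
UPDATE t
SET t.Denominator = (SELECT src.Numerator
                     FROM dbo.TEST src
                     WHERE src.MeasureID = 'c001'
                       AND src.GroupID = 'A')
FROM dbo.TEST t
WHERE t.MeasureID = 'c010'
  AND t.GroupID = 'A';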
Hi, all experts here,
Do we always have to use the SCD component when loading data into a data warehouse to handle changed rows?
I am looking forward to hearing from you and thank you very much in advance for your help.
With best regards,
I have multiple XML data files in a directory, say C:\XMLData: abc1.xml, abc2.xml, abc3.xml, etc.
I need to loop through each file in SSIS with a Foreach Loop container, get the file name (say abc1), and load the data of abc1.xml into the abc1 table in the SQL Server DB.
The next iteration will pick up abc2.xml, find the abc2 table in the SQL Server DB, and insert the data into the abc2 table.
On each iteration, the XML source should also point to the corresponding XSD file.
The tables are already created in the DB.
I have solved my problem up to getting the file name from each iteration and assigning it to a variable; in the OLE DB destination, for the data access mode I select 'Table name or view name variable', so the corresponding table gets selected for data insertion.
I just want to know how I can read the corresponding XSD file for each XML data file during iteration.
We have a job that loads data from an Oracle DB into our SQL Server 2000 DB twice a day. The schedule has just changed, so now there is a possibility of my west coast users being impacted when it runs at 5 PM PST and my east coast users when it runs at 7 AM EST. As a workaround, I have developed a DTS package that loads the data into temp tables instead of the real tables, i.e. Oracle -> XTable_temp instead of Oracle -> XTable. The load sometimes takes an hour to an hour and a half, so this solution works great, but I then want to lock the table, drop it, and rename the temp table to XTable. The pseudocode would be:
Begin Transaction
Lock Table XTable
Drop XTable
Alter Table XTable_temp rename to XTable
Release Lock XTable
End Transaction
Create XTable_temp
I see two issues with this solution: 1) I think if I lock XTable, the lock would be released when the table is dropped and XTable_temp is being renamed; 2) I can't find a command to rename a table (see the sp_rename sketch after this post).
Any ideas on a process that might help?
TIA,
A
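For what it's worth, SQL Server 2000 does have sp_rename for the rename step. A sketch of the swap (this doesn't by itself solve the locking concern, and sp_rename on a table raises an informational warning about dependent scripts and procedures):

BEGIN TRANSACTION;
    -- DROP takes a schema-modification lock; readers of XTable block until commit.
    DROP TABLE XTable;
    EXEC sp_rename 'XTable_temp', 'XTable';
COMMIT TRANSACTION;

On later versions a common refinement is to keep two permanent tables and repoint a view (or synonym) at whichever copy is current, which avoids the drop/rename window entirely.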
I have XML data passed to the stored proc in the following format, and within the stored proc I am accessing the XML data using the nodes() method.
Here is an example of what I am doing:
DECLARE @Participants XML
SET @Participants = '<ArrayOfEmployees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Employees EmpID="1" EmpName="abcd" />
<Employees EmpID="2" EmpName="efgh" />
</ArrayOfEmployees >'
SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM
@Participants.nodes('/ArrayOfEmployees /Employees ') Participants (Node)
the above query produces the following result
EmployeeID EmployeeName
--------------- -----------
1 abcd
2 efgh
How do I join the data coming out of the above query with another table in the database, for example Employee in this case?
I.e. somewhat like this (note: the following query does not perform the join correctly):
SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM
@Participants.nodes('/ArrayOfEmployees /Employees ') Participants (Node)
INNER JOIN
Employee EMP ON EmployeeID = EMP .EmployeeID
My desired output after the join would be
EmployeeID EmployeeName Email Home Address
--------------- ----------- --------------- -----------
1 abcd abcd@abcd.com New York
2 efgh efgh@efgh.com Austin
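The ON clause cannot see the EmployeeID alias from the SELECT list, which is why the join misbehaves. One fix is a derived table (a sketch, keeping the names above; the Email and HomeAddress columns on Employee are assumptions based on the desired output):

SELECT x.EmployeeID,
       x.EmployeeName,
       EMP.Email,
       EMP.HomeAddress
FROM (
    SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
           Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
    FROM @Participants.nodes('/ArrayOfEmployees/Employees') Participants (Node)
) x
INNER JOIN Employee EMP
    ON x.EmployeeID = EMP.EmployeeID;

Equivalently, the join predicate could repeat the expression: ON Participants.Node.value('@EmpID', 'INT') = EMP.EmployeeID.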
I want to bulk insert data into a table named scd_event_tab inside a database named rdb.
When I do select * from rdb.dbo.scd_event_tab, I get:
JOB_ID, RUN_ON, PRIORITY, PAYLOAD, TIMEOUT_INTERVAL, STATUS, PICKUP_TIME, SCD_TYPE, SCHEDULE_ID, DB_ADMIN_LOGIN_REQUIRED_YN
I saved the result into a CSV file and then truncated the table. Now I am trying to bulk insert the data back into the table, so I used:
bulk insert
rdb.dbo.scd_event_tab from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
codepage = 'RAW',
datafiletype = 'native',
fieldterminator = ' ',
keepidentity,
keepnulls
);
go
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
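A likely cause: datafiletype = 'native' tells BULK INSERT to expect SQL Server's binary native format, but the file was saved as a text CSV, which fits the overflow and terminator errors above. A sketch of options for a comma-delimited text file (the terminators are assumptions about how the CSV was written):

bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    datafiletype = 'char',     -- text file, not native binary
    fieldterminator = ',',
    rowterminator = '\n',
    keepidentity,
    keepnulls
);
go

If any field (PAYLOAD, say) can itself contain commas, simple terminator splitting won't cope; re-export with an unambiguous delimiter or use a format file.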
I have a query that needs to look for 5 specific records in a table; basically, I need to hardcode them. Below is my query, which didn't work out:
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD,data
from BF_DATA
WHERE (BF_ORGN_CD,BF_BDOB_CD,BF_TM_PERD_CD) in ***** i guess this is the wrong query****
('A1', 'B1', 'C1')
('A2', 'B2', 'C2')
('A3', 'B3', 'C3')
('A4', 'B4', 'C4')
('A5', 'B5', 'C5')
But if I use the query below, it will generate more records than these 5:
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD,data
from BF_DATA
WHERE (BF_ORGN_CD) in ('A1', 'A2', 'A3', 'A4', 'A5')
and (BF_BDOB_CD) in ('B1', 'B2', 'B3', 'B4', 'B5')
and (BF_TM_PERD_CD) in ('C1', 'C2', 'C3', 'C4', 'C5')
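T-SQL has no multi-column IN list, so the usual fix is to join against a derived table of the five hardcoded rows (a sketch using the column names above; VALUES row constructors require SQL Server 2008+):

SELECT b.BF_ORGN_CD, b.BF_BDOB_CD, b.BF_TM_PERD_CD, b.data
FROM BF_DATA b
JOIN (VALUES ('A1', 'B1', 'C1'),
             ('A2', 'B2', 'C2'),
             ('A3', 'B3', 'C3'),
             ('A4', 'B4', 'C4'),
             ('A5', 'B5', 'C5')) AS k (ORGN, BDOB, PERD)
  ON  b.BF_ORGN_CD    = k.ORGN
  AND b.BF_BDOB_CD    = k.BDOB
  AND b.BF_TM_PERD_CD = k.PERD;

This matches exactly the five combinations, rather than the cross product the three independent IN lists allow.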
I want to create an XML file with the data in my table. I have a question about tags.
SELECT
-- Root element attributes
'http://tempuri.org/Form.xsd' AS 'xmlns',
'http://www.w3.org/2001/XMLSchema-instance' AS 'xmlns:xsd',
(
SELECT
-- Creating a default element
[Code] ....
This is my query. When I use the 'xmlns' namespace alias, the result is below:
<Form xmlns="http://tempuri.org/Form.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance">
<Veri xmlns="">
<Satir>
<Id>1</Id>
</Satir>
</Veri>
</Form>
I don't want to see xmlns in the 'Veri' tag. When I use 'xmlna' or another word instead of 'xmlns', the result is below:
<Form xmlns="http://tempuri.org/Form.xsd" xmlna:xsd="http://www.w3.org/2001/XMLSchema-instance">
<Veri>
<Satir>
<Id>1</Id>
</Satir>
</Veri>
</Form>
How can I remove the empty xmlns from the 'Veri' tag while still using the namespace on the 'Form' tag?
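The empty xmlns="" appears because nested FOR XML subqueries always re-declare the default namespace, and the inner elements are not in the Form namespace; there is no option to suppress it. One commonly suggested (admittedly blunt) workaround is to strip the declaration as text. A sketch; @FormXml is a hypothetical variable standing in for the result of the query above:

DECLARE @FormXml xml;
SET @FormXml = '<Form xmlns="http://tempuri.org/Form.xsd"><Veri xmlns=""><Satir><Id>1</Id></Satir></Veri></Form>';

DECLARE @doc nvarchar(max);
SET @doc = CAST(@FormXml AS nvarchar(max));   -- serialize the generated document
SET @doc = REPLACE(@doc, ' xmlns=""', '');    -- drop the empty declaration on Veri
SELECT CAST(@doc AS xml) AS Form;             -- back to xml if needed

Note that after the REPLACE, Veri inherits Form's default namespace; that is usually the intent here, but it does change the document's meaning for a namespace-aware consumer.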
I have the following two tables:
CREATE TABLE [MailBox].[Message](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[SenderId] [bigint] NOT NULL,
[Message] [nvarchar](max) NOT NULL,
[SentDate] [datetime] NOT NULL,
CONSTRAINT [PK_MailBox.Message] PRIMARY KEY CLUSTERED
[Code] ....
I'm building messaging functionality into my application. I'm able to insert a message into the database, and this message then appears inside the other user's inbox. The issue I have is that when I click on this message to view the conversation, I make a call to the following sp, as shown here:
@UserId bigint,
@SenderId bigint
AS
BEGIN
SET NOCOUNT ON;
[Code] .....
The problem with this is that I'm trying to join to the user photos table to return their profile pictures, but for some reason, even though I have specified IsProfilePic, I get all the photos returned. Instead, it should be two photos: one for the @UserId and the other for the @SenderId. It's equivalent to me doing this:
Select *
From [User].[User_Photos]
where (UserId = 1 or UserId = 2) and IsProfilePic = 1
And this returns the correct information.
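A frequent cause of this symptom (a guess, since the procedure body is elided above) is operator precedence: AND binds tighter than OR, so without parentheses the IsProfilePic filter attaches to only one side of the OR. A sketch of the difference:

DECLARE @UserId bigint = 1, @SenderId bigint = 2;

-- Parsed as UserId = @UserId OR (UserId = @SenderId AND IsProfilePic = 1),
-- so ALL of @UserId's photos come back.
SELECT * FROM [User].[User_Photos]
WHERE UserId = @UserId OR UserId = @SenderId AND IsProfilePic = 1;

-- With parentheses: only the two profile pictures.
SELECT * FROM [User].[User_Photos]
WHERE (UserId = @UserId OR UserId = @SenderId) AND IsProfilePic = 1;

The working ad-hoc query in the question already has the parentheses, which is why it behaves correctly.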