SQL Server 2008 :: Merge Moves Data Into Wrong Filegroup?
Feb 18, 2015
I've got a partitioned table where I am trying to switch the first partition into a staging table, merge the first boundary and later drop the file and the file group.
This is my starting point:
boundary_idBoundary_valuePartitionNumber
11/09/2012 0:002
21/10/2012 0:003
PartitionNumberPartitionRowsFileGroupName
1799AdtLog_Archive_201208
2300AdtLog_Archive_201209
After I switch partition 1 to a staging table I run:
ALTER PARTITION SCHEME My_ps NEXT USED AdtLog_Archive_201209;
and
ALTER PARTITION FUNCTION My_pf() MERGE RANGE ('2012-09-01 00:00:00.000');
I expect the 300 records from the former 2nd partition to stay in AdtLog_Archive_201209, however I get this:
PartitionNumberPartitionRowsFileGroupName
1300AdtLog_Archive_201208
2310AdtLog_Archive_201210
How do I make sure that the data stays in AdtLog_Archive_201209 file group?
View 1 Replies
ADVERTISEMENT
Oct 14, 2015
I have a package running on Server A. I am copying data from Server B to server C. Does the data move from server B to server A for processing first and then from server A to server C?
View 5 Replies
View Related
Oct 19, 2015
I'm working on a script to merge multiple columns(30) into a single column separated by a semicolons, but I'm getting the following error below. I tried to convert to the correct value. but I'm still getting an error.
Error: "Conversion failed when converting the varchar value ';' to data type tinyint".
select
t1.Code1TypeId + ';' +
t1.Code2TypeId + ';' +
t1.Code3TypeId + ';' +
t1.Code4TypeId as CodeCombined
from Sampling.dbo.account_test t1
where t1.Code1TypeId = 20
or t1.Code2TypeId = 20
or t1.Code3TypeId = 20
or t1.Code4TypeId = 20
View 4 Replies
View Related
Mar 4, 2015
Can we create the Database with two schema and having the separate file group for each schema.
View 5 Replies
View Related
Jun 3, 2015
SQL 2008 R2
I have a partitioned table in which one of the partitions is on the Primary filegroup. I want to move the data off of that Primary filegroup, and and on to a new filegroup named RTFG6.
Scheme and function currently defined as:
CREATE PARTITION SCHEME [PS1_Left_id] AS PARTITION [PF1_Left_id] TO ([RTFG1], [RTFG2], [RTFG3], [RTFG4], [RTFG5], [PRIMARY])
CREATE PARTITION FUNCTION [PF1_Left_id](int) AS RANGE LEFT FOR VALUES (10, 15, 35, 48, 53)
I've tried split and merge, and for whatever reason, always end up with the Primary filegroup holding data.
How do i get it off of Primary completely, and onto RTFG1 to RTFG6?
I don't want to export to a holding table and re-create the table if i can avoid it, due to identity columns and relationships with multiple tables.
View 0 Replies
View Related
Apr 26, 2006
Hi there!
I've a table with a nvarchar(max) column on Filegroup2. While inserting a lot of big datas (Word documents, each 2.5MB), the primary filegroup is growing - but the Filegroup2 remains still on its creation size.
Is that a bug? By the way, i ve dropped the table and recreated on the other filegroup, after that i've restartet the SQL 2005 Service.
Someone experience?
Thanks, Torsten
View 1 Replies
View Related
May 21, 2015
I have one query which is pulling Balance sheet amounts from SAP Business One database. The query is giving the correct figures for the rest of the accounts except for the VAT Input refundable account 123600 and VAT Output Payable account 221400. The query sums up totals at Title account level(FatherNum) and the above accounts are the title accounts:
SELECT CAST(T0.TransId AS Varchar(30)) AS TransId, CASE WHEN t3.FatherNum IN ('100000', '350000') THEN '-3 OK' ELSE CAST(T0.TransType AS Varchar(30))
END AS TransType, CAST(T0.BaseRef AS VarChar(30)) AS BaseRef, T0.RefDate,T0.Number as Docnum, DATEPART(Month, T0.RefDate) AS JrnMonth, T0.FinncPriod, T1.Account, T1.Debit,
T1.Credit, T1.Debit - T1.Credit AS JrnAmt, ISNULL(T1.SYSCred, 0) AS SysCred, ISNULL(T1.SYSDeb, 0) AS SysDeb, T1.ShortName, T1.Ref1, T1.Ref2,
[code]....
View 8 Replies
View Related
Mar 12, 2015
I'm not even sure this is possible but I'm using MERGE in a process that has 3 source tables (the process steps through each source table sequentially) and I need to delete from the Target database occasionally.
My current code is
sqlMerge = "MERGE " + TableName + " AS target USING @CData AS source" +
" ON target.TotRsp = source.TotRsp AND target.ClientRef = source.ClientRef AND target.dbPatID = source.dbPatID" +
" WHEN MATCHED THEN" +
" UPDATE SET dbPatFirstName = source.dbPatFirstName, dbPatLastName = source.dbPatLastName,
[Code] ....
The Target db data is made up from several different clients and when the MERGE runs it uses TotRsp, ClientRef and dbPatID to uniquely match a source row to the target row and if no match it inserts the source row.
My problem is that when this runs with Source A first, it will delete all merged data from Source B & C. Then when Source B runs it will insert all Source B data but delete all from A & C and so on.
Is there way that that I can include additional clauses into NOT MATCHED BY SOURCE THEN so it knows only to delete when data has come from say Source A. Once the data is in the target table there is no reference to which source table it came from tho.
If there isn't a solution I suppose I could always add an extra column to the target db to indicate which source it came from and then have something like
NOT MATCHED BY SOURCE AND t.Source = 'SourceA'.
That's quite a bit of work my end to do that tho so I'd like to be sure it works.
View 1 Replies
View Related
Aug 25, 2015
SELECT
part.num, woitem.qtytarget AS woitemqty,
(SELECT LIST(wo.num, ',')
FROM wo INNER JOIN moitem ON wo.moitemid = moitem.id
WHERE moitem.moid = mo.id) AS wonums, mo."USERID" AS mo_USERID
[Code] ...
View 5 Replies
View Related
Feb 19, 2015
I need the requirements of merging multiple rows of same ID as single row.
My Table Data:
IDLanguage1Language2Language3Language4
1001NULL JAPANESENULL NULL
1001SPANISH NULL NULL NULL
1001NULL NULL NULL ENGLISH
1001NULL NULL RUSSIAN NULL
Required Output Should be,
IDLanguage1Language2Language3Language4
1001SPANISH JAPANESERUSSIAN ENGLISH
How to achieve this output. Tried grouping but its not working also producing the same result.
View 1 Replies
View Related
Mar 4, 2015
We have a central office with a SQL2005SP4 server (yeah, I know... old as heck) that's the main database and it has multiple subscribers in regional offices. Well... one of the regional offices server is failing, and it needs to be replace.
The original server is an ancient Win2003 86x
The Server team will build a new Win2008r2 64x, and use the same name and IP address
And I'm tasked with the SQL part.
I'll be installing the same version/patch of SQL, but 64x instead, and migrate all databases, including the system databases.
How do I handle replication? Do I need to reintialize from scratch? or can I just use the backup as a starting point?
View 1 Replies
View Related
Jul 8, 2015
Merge two rows into one (conditions or Pivot?)
I have Temptable showing as
Columns:EmpName, Code, Balance
Rows1: EmpA, X, 12
Rows2: EmpA, Y, 10
I want to insert the above temp table to another table with column names defined below like this
Empname, Vacation Hours, Sicks Hours
EmpA, 12, 10
Basically if it is X it is vacation hours and if it is Y it is sick Hours. Needs a simple logic to place the apprpriate hours(Balance) to its respective columns. I'm not sure how I can achieve this in using Pivot or Conditions.
View 3 Replies
View Related
Aug 27, 2012
Can the collation used by SSIS be changed or influenced during install or run time? We have found that our databases, that use a mandatory "LATIN1_GENERAL_BIN", have incorrect SSIS Merge Join output. Changing our database collation in testing didn't make a difference. What matters is the data. Which Windows collation is SSIS using?
Example Data:
FIRSTNAME
FIRSTNAME
FIRSTS-A-NAME
FIRSTS_A_NAME
FIRST_NAME
FIRST_NAME
FIRSTname
FIRSTname
FIRS_NAME
put in a Sort task before the Merge Join task as setting advanced properties isn't enough (as described by Eric Johnson here --> [URL] ......
We are using 64-bit SQL Server 2008 R2 w/ SP1 in Windows Server 2008 R2 ENT w/ SP1.
UPDATE from ETL team: Explicitly ordering the source with "COLLATE Latin1_General_CS_AS" seems to have the same effect as using a separate sort task. We don't feel that we can rely on our findings, however, unless we have documentation that this collation is what is behind SSIS.
View 2 Replies
View Related
Sep 22, 2015
I have a merge replication. Currently works fine. Publisher & Distributor are on the same server. I need to change the location of the alternate folder for the snapshot files.
I’ll probably just change it through the GUI, but would I use sp_changedistpublisher or sp_changemergepublication if I were scripting everything?
My real concern is the subscribers. Do I have to ‘tell’ the subscribers where the alt folder has been changed to? Do I just run sp_changemergepullsubscription on the subscribers?
View 1 Replies
View Related
Nov 6, 2015
For study effect, can I configure merge replication using just one SQL Server Instance?
View 3 Replies
View Related
Feb 10, 2015
I have a table:
declare tableName table
(
uniqueid int identity(1,1),
id int,
starttime datetime2(0),
endtime datetime2(0),
parameter int
)
A stored procedure has new set of values for a given id. Sometimes the startime and endtime are the same, in which case I update the value of parameter. Sometimes I add a new time range (insert statement), and sometimes I delete a time range (delete statement).
I had a question on merge, with insert, delete and update and I got that resolved. However I have a different question regarding performance of the merge statement.
If my target table has hundreds of millions of records and I want to delete/update/insert a handful of records, will SQL server scan the entire target table? I can't have:
merge ( select * from tableName where id = 10 ) as target
using ...
and I can't have:
merge tableName as target
using [my query] as source on
source.id = target.id and
source.starttime = target.startime and
source.endtime = target.endtime
where target.id = 10
...
This means I cannot filter the set of rows in the target table to a handful of records where id = 10.
View 1 Replies
View Related
Apr 26, 2015
With merge/insert statements ...Is DISTINCT best way to handle problem of source table containing duplicate rows, along with WHERE NOT IN statement? the source dataset is large and having to do DISTINCT and further filtering is taxing on the ETL.
DDL
source table
CREATE TABLE [dbo].[source](
[Product_ID] [INT] NOT NULL,
[ProductCode] [VARCHAR](20) NULL,
[ProductName] [VARCHAR](100) NULL,
[ProductColor] [VARCHAR](20) NULL,
[code]....
View 0 Replies
View Related
Jul 28, 2015
I have few tables, which are replicated and partitioned. They also have archival process. I want to avoid having to run that same process on the subscriber.
Replication of partition switching is easy. However I am not sure how to replicate merge range and empty filegroup/file drops.
There the following article options:
Copy file group associations
Copy table partitioning schemes
Copy index partitioning schemes
I am not sure if these are enough to implement the replication of merge range and empty filegroup/file drops.
I could not find and option to copy partition functions.
View 0 Replies
View Related
Sep 11, 2015
I have some simple files but they are failing because the delete history task is failing as it is looking for files in a non existent directory.
It is looking for files in C:Program FilesMicrosoft SQL ServerMSSQL10_50.INSTANCEMSSQLLog whereas it should be looking in C:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLLog
how I can get this corrected so I can get the Maintenance Plans to run correctly.
I have tried deleting and recreating the Plan but to no avail
View 0 Replies
View Related
Jun 20, 2013
Problem Summary: Merge Statement takes several times longer to execute than equivalent Update, Insert and Delete as separate statements. Why?
I have a relatively large table (about 35,000,000 records, approximately 13 GB uncompressed and 4 GB with page compression - including indexes). A MERGE statement pretty consistently takes two or three minutes to perform an update, insert and delete. At one extreme, updating 82 (yes 82) records took 1 minute, 45 seconds. At the other extreme, updating 100,000 records took about five minutes.When I changed the MERGE to the equivalent separate UPDATE, INSERT & DELETE statements (embedded in an explicit transaction) the entire update took only 17 seconds. The query plans for the separate UPDATE, INSERT & DELETE statements look very similar to the query plan for the combined MERGE. However, all the row count estimates for the MERGE statement are way off.
Obviously, I am going to use the separate UPDATE, INSERT & DELETE statements. The actual query plans for the four statements ( combined MERGE and the separate UPDATE, INSERT & DELETE ) are attached. SQL Code to create the source and target tables and the actual queries themselves are below. I've also included the statistics created by my test run. Nothing else was running on the server when I ran the test.
Server Configuration:
SQL Server 2008 R2 SP1, Enterprise Edition
3 x Quad-Core Xeon Processor
Max Degree of Parallelism = 8
148 GB RAM
SQL Code:
Target Table:
USE TPS;
IF OBJECT_ID('dbo.ParticipantResponse') IS NOT NULL
DROP TABLE dbo.ParticipantResponse;
[code]....
View 9 Replies
View Related
Oct 15, 2015
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently in the past, the only way to achieve this via SQL is to create a new table, copy over data, drop the old table and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table in order to point the TEXTIMAGE_ON to new filegroup using SQL Server 2014? We are on Standard edition. The technique I am using is the drop constraint (with move option) and add constraint (to new filegroup) commands. The data and indexes move, but not the text data (it still is in primary filegroup).
View 0 Replies
View Related
Jul 27, 2015
I have been creating databases in SQL 2008 with a primary filegroup for the system objects and a secondary, marked Default, for the data.
We are preparing a migration to SQL 2014, and the administrator is complaining he won't adopt this structure on the new servers because 'there is no benefit' and 'a backup cannot be restored (!?)'.
View 2 Replies
View Related
Nov 12, 2014
I read , When sql server Database having multiple data files within single filegroup then sql server writes data in multiple proportional file algorithm where the amount of data written to a file is proportionate to the amount of free space in that file, compared to other files in the filegroup.
so if there is no filegroups created and multiple secondary files are attached in databse , is there same way data stored and writes data in multiple files by the same algorithm or any different way.
View 2 Replies
View Related
Feb 19, 2006
Im trying to insert a record in my sql server 2005 express database.The following function tries that and without an error returns true.However, no data is inserted into the database...Im not sure whether my insert statement is correct: I saw other example with syntax: insert into table values(@value1,@value2)....so not sure about thatAlso, I havent defined the parameter type (eg varchar) but I reckoned that could not make the difference....Here's my code: Function CreateNewUser(ByVal UserName As String, ByVal Password As String, _ ByVal Email As String, ByVal Gender As Integer, _ ByVal FirstName As String, ByVal LastName As String, _ ByVal CellPhone As String, ByVal Street As String, _ ByVal StreetNumber As String, ByVal StreetAddon As String, _ ByVal Zipcode As String, ByVal City As String, _ ByVal Organization As String _ ) As Boolean 'returns true with success, false with failure Dim MyConnection As SqlConnection = GetConnection() Dim bResult As Boolean Dim MyCommand As New SqlCommand("INSERT INTO tblUsers(UserName,Password,Email,Gender,FirstName,LastName,CellPhone,Street,StreetNumber,StreetAddon,Zipcode,City,Organization) VALUES(@UserName,@Password,@Email,@Gender,@FirstName,@LastName,@CellPhone,@Street,@StreetNumber,@StreetAddon,@Zipcode,@City,@Organization)", MyConnection) MyCommand.Parameters.Add(New SqlParameter("@UserName", SqlDbType.NChar, UserName)) MyCommand.Parameters.Add(New SqlParameter("@Password", Password)) MyCommand.Parameters.Add(New SqlParameter("@Email", Email)) MyCommand.Parameters.Add(New SqlParameter("@Gender", Gender)) MyCommand.Parameters.Add(New SqlParameter("@FirstName", FirstName)) MyCommand.Parameters.Add(New SqlParameter("@LastName", LastName)) MyCommand.Parameters.Add(New SqlParameter("@CellPhone", CellPhone)) MyCommand.Parameters.Add(New SqlParameter("@Street", Street)) MyCommand.Parameters.Add(New SqlParameter("@StreetNumber", StreetNumber)) MyCommand.Parameters.Add(New SqlParameter("@StreetAddon", StreetAddon)) MyCommand.Parameters.Add(New SqlParameter("@Zipcode", Zipcode)) MyCommand.Parameters.Add(New SqlParameter("@City", City)) MyCommand.Parameters.Add(New SqlParameter("@Organization", Organization)) Try MyConnection.Open() MyCommand.ExecuteNonQuery() bResult = True Catch ex As Exception bResult = False Finally MyConnection.Close() End Try Return bResult End FunctionThanks!
View 1 Replies
View Related
Sep 26, 2007
Alrighty.... I'm a long time listener and a first time caller here.
I've been reading multiple topics dealing with my issue but none seem to really address what I'm doing.
We have 3 separate enviroments, Dev, QA, and Prod. Quite frequently we have a database that gets moved from our Dev Server to QA, or QA to Prod, or Prod to QA ect...
What we have been doing is when a database is moved, it holds all of the actual database logins, but when you look within the actual server logins there's nothing there (dealing w/ that specific database). So we then have to go first through all of our logins on the database write them down, then go one by one and create them on the server.
I'm wondering if there is a more simple way to be doing this to cut down on our administration time?
3/4 of our ID's on the database are all linked through our Domain using windows authentication. And since we keep all of our "application/local SQL ID's" enviromentally separate. (each ID XXX has its own ID for Dev, QA and Prod... XXXdev, XXXqa, and XXXprod) so we'll have to do those manually anyways, but I'm really hoping someone has a solution to this timely administration process!
Thanks for all of your help!
-Randy
Information Security Analyst
Securian Financial Group
St.Paul Mn,
"When you do things right, people won't be sure you've done anything at all."
Bender Bending Rodríguez - Futurama
View 12 Replies
View Related
Dec 5, 2007
Getting this error after successfully setting up merge replication scenario:
The server agent could not connect to the publisher (29049)
Well, successful all the way up to the point of subscribing that is.
The publication is configured with IIS 5 under XP. Already had a 3.1 sample publication running. The new web share is running sqlcesa35.dll and successfully loads diag page. The publication snapshot is successfully created. The snapshot agent and IUSR accounts have permission in publication and distribution dbs and are both in the PAL. I created the publication under SQL 2008 Mgt Studio. The web config did not work from within Mgt Studio, but did work when I ran wizard from Start Menu. Subscription steps all seem normal and then when attempt to sync at end of wizard get this error.
Will try again tomorrow with fresh environment, only using SQL 2005, SQLCE 3.5 and the 3.5 server tools. This is on a secondary test machine with lots of new Visual Studio stuff etc.
Reference here on MSDN says contact product support if reproducible.
http://technet.microsoft.com/en-us/library/ms172357.aspx
View 13 Replies
View Related
Oct 18, 2006
Rajiv Parekh writes "Since last few weeks on Startup of System few Databases moves to SUSPECT MODE.
This probelm is not daily but atleast 0nce in week.The day is is not fixed. Have Unistalled SQL & again Reinstalled, but the problem continues.
Please inform my Why this problem & what is the Solution."
View 3 Replies
View Related
Jan 1, 2000
When I move an SQL group to the other cluster member, it seems that the ODBC connections hang on the client web servers, and do not process any queries until they are rebooted. I'm using IIS/Cold fusion web servers with an SQL 7.0 Cluster (MSCS). Any ideas why this may be happening?
Thanks,
Steve Johnson
View 2 Replies
View Related
Sep 4, 2015
I need to create a function that replaces the data in a column with an 'X' based on the LEN of the data in the column. I created one that does a replacement, but it fills the column based on the max data length, and not the current length of the string or integer. An example of what I'm trying to accomplish.
Original data in a varchar(30) column:
thisisavalue
thisisanothervalue
thisisanothervalueagain
shortval
replaced with
xxxxxxxxxx
xxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxx
xxxxxxx
My current function is replacing the data like this:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
View 4 Replies
View Related
Oct 7, 2015
I have partitions that I have filled with data. I am not trying to figure out exactly how much data the partitions contain, and therefore I will be able to see if any of them are close to hitting their autogrow conditions. If I were looking at a single unpartitioned table, then I could maybe look at the table properties to determine data and index sizes, and compare that to the size of the mdf file size, but for partitions, then I am not sure how I would query this information out. Any pointers on how this information could be queried out of the system?
View 3 Replies
View Related
Jan 28, 2008
Hi all
I've found this problem working with a VLDB, six month ago when I install the DBMS (Win2k3 x64+sp2, SQL 2k5 x64 +sp2, 4 dual core processor and 12 GbRAM) I've got 10 disk (actually ten LUN from a Storage Area Network), each 50Gb.
I've put TempDB and Transaction Log on two separate 50 Gb disk and put the database on 8 different data file on the 8 disk; I've created each datafile with a size of 50 Gb (autogrowth disable), so my DB has 400Gb space in it's datafile.
After a while the datafile began to fill and we decide to add a couple more 50Gb disk where I decide to put to new datafile; now my db is around 430 Gb and I've got this strange situation:
The first 8 datafile now are almost full of data, and obviously they can't growth since they already occupy the whole disk.
The two additional datafile are relatively empty (about 15 Gb each).
As far as I understand now each time that SQL should write to the databse it writes only on the 2 new datafile, and I fear that this can affect performance.
I'd like to reorganize the space in order to have 10 datafile, each with 43Gb of data, but I didn't find any instruction/tool able to move data between datafile.
Anyone has any hint ?
Thank you in advance for any suggestion
Stefano
View 5 Replies
View Related
May 13, 2008
OK, I know this is out there all over and yes I did a search for this topic; however, I am confused about tables with an image data type and with moving text file group to another filegroup.
Here is what I have:
I have a table storing imaged documents and has become very large. I want to move the table to another filegroup. The table is created like this:
USE [PD51_Data]
GO
/****** Object: Table [dbo].[SCANNEDDOCUMENTS] Script Date: 05/13/2008 14:52:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[SCANNEDDOCUMENTS](
[DocID] [int] IDENTITY(1,1) NOT NULL,
[CaseID] [int] NOT NULL,
[DocName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[Doc] [image] NOT NULL,
[DocLocation] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[DocNotes] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[TopicID] [int] NULL,
[ScannedDocumentsCheckSum] [varchar](128) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,PRIMARY KEY CLUSTERED
(
[DocID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[SCANNEDDOCUMENTS] WITH NOCHECK ADD CONSTRAINT [ISCANNEDDOCUMENTS2] FOREIGN KEY([TopicID])
REFERENCES [dbo].[TOPICS] ([TopicID])
GO
ALTER TABLE [dbo].[SCANNEDDOCUMENTS] CHECK CONSTRAINT [ISCANNEDDOCUMENTS2]
On a test DB, I moved the clustered and nonclustered indexes to a secondary filegroup no problem, but it still shows to be stored in the primary filegroup. I read an article about having to create a new table in the secondary in order to move the images and text file group. Has anyone come across this?
Do I need to drop the clustered index and FK to move to a secondary filegroup?
Or
Do I create a new table into the secondary filegroup and then add the Clustered index and constraints?
View 4 Replies
View Related
May 14, 2007
Hi,
I am getting below error while importing data in SQL 2005 Express:
"error 0xc0202009: Data Flow Task: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Could not allocate space for object 'dbo.HistoryLog'.'PK_HistoryLog' in database 'HistoryData' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.".
"
I have select :
Enable Autogrownth = Yes
Filegrowth = 1 MB
Maximum File Size = Unrestricted File growth
I don't know what else I am missing?
Please help
thanks
AA
View 8 Replies
View Related