Greetings,
I want to bulk load data into user-defined SQL Server
tables. To do this I want to disable all the constraints on all the user-defined tables.
I found a solution in one of the threads and did the following:
declare @tablename varchar(30)
declare c1 cursor for select name from sysobjects where type = 'U'
open c1
fetch next from c1 into @tablename
while ( @@fetch_status <> -1 )
begin
exec ( 'alter table ' + @tablename + ' check constraint all ')
fetch next from c1 into @tablename
end
close c1
deallocate c1
go
Now when I try to truncate one of the tables (say, titles) I get the following error:
Cannot truncate table 'titles' because it is being referenced by a
FOREIGN KEY constraint.
Can anyone show me the right path? I am working on ASE 12.5
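For what it's worth, a minimal sketch of how this plays out on Microsoft SQL Server (Sybase ASE syntax differs): it is NOCHECK, not CHECK, that disables a constraint, and even a disabled FOREIGN KEY blocks TRUNCATE TABLE, so the constraint must be dropped or DELETE used instead. The titleauthor table below is just an assumed referencing table from the pubs sample.
Code Snippet
-- NOCHECK disables checking; CHECK re-enables it
ALTER TABLE titleauthor NOCHECK CONSTRAINT ALL;
-- TRUNCATE TABLE titles still fails while the FK exists; DELETE is allowed
DELETE FROM titles;
ALTER TABLE titleauthor CHECK CONSTRAINT ALL;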
We have a typical issue with Column Store Index; we have a procedure which does two activities - Switch & Reverse Switch.
Switch: 1. Fetch the partitions that need to be switched. 2. Switch the data from the main table to the switch table. 3. Disable the columnstore index on the switch table.
SSIS package: 4. Load data into the switch table (insert/update).
Reverse Switch: 5. Re-enable the columnstore index on the switch table. 6. Switch the data back from the switch table to the main table.
Issue: Sometimes the columnstore index does not get disabled, and the package fails, complaining that we should disable the columnstore index and try loading the data again.
If we re-run the procedure, the columnstore index gets disabled.
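In case it's useful, a minimal sketch of the disable/verify/rebuild pattern (index and table names are assumptions); verifying sys.indexes.is_disabled before handing off to the SSIS load is one way to catch the intermittent failure up front:
Code Snippet
-- hypothetical names throughout
ALTER INDEX cci_SwitchTable ON dbo.SwitchTable DISABLE;

-- fail fast if the disable did not take effect
IF EXISTS (SELECT 1 FROM sys.indexes
           WHERE object_id = OBJECT_ID('dbo.SwitchTable')
             AND name = 'cci_SwitchTable'
             AND is_disabled = 0)
    RAISERROR('Columnstore index is still enabled.', 16, 1);

-- reverse switch: rebuild re-enables the index
ALTER INDEX cci_SwitchTable ON dbo.SwitchTable REBUILD;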
My requirement is to sling a rowset from one place in SQL Server into a table in another place in the most performant way. I want this to be parameterizable: I want to provide just a connection string and some SQL for the source, and a connection string and a table name for the destination. The package should do the rest.
The solution I chose was a 2014 SSIS package with the source and destination as ADO.NET connections configured from project variables. The package has a script task to bulk copy the data. For performance I disable the non-clustered indexes first.
But this performance precaution causes the bulk copy to time out after delivering the correct rowcount to the destination table. What can I do to avoid this error?
Here's my script code:
// Get hold of the source and a data reader from it
SqlConnection sqlconnSource =
    Dts.Connections["source"].AcquireConnection(Dts.Transaction) as SqlConnection;
SqlCommand sourceSqlCommand = new SqlCommand(SourceSQL, sqlconnSource);
sourceSqlCommand.CommandTimeout = 1500;
[Code] ....
This takes 128 seconds to put 13 million thin rows into my empty destination table and then throws an exception with this message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
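Since the full rowcount arrives before the failure, the timeout is presumably raised after the copy itself. Two things worth checking (the SqlBulkCopy portion of the script is elided above, so these are assumptions): the SqlBulkCopy.BulkCopyTimeout property, which defaults to 30 seconds and can be set to 0 for no limit, and the cost of rebuilding the disabled non-clustered indexes afterwards. A T-SQL sketch for generating the disable statements, with a placeholder destination table name:
Code Snippet
-- generate an ALTER INDEX ... DISABLE per non-clustered index
-- on the (hypothetical) destination table
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON dbo.DestTable DISABLE;'
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.DestTable')
  AND i.type = 2          -- non-clustered
  AND i.is_disabled = 0;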
I have bulk data that I don't want to have to enter manually. How can I achieve this in SQL Server? I want to be able to load from a text file (or any text format) with my data separated by delimiters. I know Oracle has something that does this called SQL*Loader.
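The closest SQL Server equivalents are the bcp command-line utility and the BULK INSERT statement. A minimal BULK INSERT sketch, where the table name, file path, and delimiters are all assumptions:
Code Snippet
BULK INSERT dbo.MyTable              -- hypothetical target table
FROM 'C:\data\mydata.txt'            -- hypothetical file path
WITH (FIELDTERMINATOR = ',',         -- column delimiter
      ROWTERMINATOR = '\n');         -- row delimiter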
Short version: what is the best/fastest way to load large amounts of data from a comma-delimited text file into a SQL Server table, where the text file contains date fields in ccyy/mm/dd format and the SQL Server table defines those fields as datetime data types?
Details: When I attempt to load files (using either bcp or BULK INSERT) containing datetime data, the load process errors because the datetime fields in my text file are in ccyy/mm/dd format and the default format for SQL Server is mm/dd/yy. I have been unable to change the default format by using the SET DATEFORMAT statement (apparently SET DATEFORMAT does not work for bcp because bcp runs outside of the SQL Server session?). The only alternatives I have come up with are: 1) change the format of the date fields in the text file from ccyy/mm/dd to mm/dd/ccyy; or 2) create a temporary table that defines the date fields as a char(n) data type, load the data into the temp table, SET DATEFORMAT to ymd, then copy the temp table into the permanent table (the permanent table using datetime data types).
Both of these alternatives would require additional processing time. Since this is a process that loads large amounts of data on a monthly (soon to be weekly) basis, speed is of the essence.
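One variation on alternative 2 avoids DATEFORMAT altogether: convert the staged character column with an explicit CONVERT style code. Style 111 is yyyy/mm/dd, so the conversion is deterministic regardless of session settings. Table and column names below are assumptions:
Code Snippet
-- staging table holds the date as char(10); style 111 = yyyy/mm/dd
INSERT INTO dbo.Permanent (SomeDate)
SELECT CONVERT(datetime, SomeDateChar, 111)
FROM dbo.Staging;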
I have several 2012 availability groups running on a cluster. I have one database that is bulk loaded every 30 minutes. The DB is about 1 GB in size. To be in an availability group it has to be set to full recovery mode, but simple or even bulk-logged would obviously be better. Is there a better way to handle the transaction log size other than running a backup after each bulk load, which causes extra overhead? With mirrors you could use simple, but since those are going away . . .
I have a csv file with 1.8 million records. A few of the text columns in each row have commas (,) in them, and hence those columns are enclosed by " ".
An example record would look like: 123,abc,"abc, city, state",222,...
Now, the 3rd column should be read as: abc, city, state. But it is reading ("abc) into the 3rd column, (city) into the 4th column, and (state") into the 5th column, resulting in data errors.
Is there a way to specify that fields are optionally enclosed by " as we do in Oracle?
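If a reasonably recent engine is an option, BULK INSERT gained native CSV handling in SQL Server 2017: FORMAT = 'CSV' with FIELDQUOTE treats quoted fields, embedded commas included, as single columns. A sketch with assumed names:
Code Snippet
-- SQL Server 2017 and later
BULK INSERT dbo.Target
FROM 'C:\data\file.csv'
WITH (FORMAT = 'CSV',
      FIELDQUOTE = '"',
      ROWTERMINATOR = '\n');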
My DBA folks are coming back with an answer to a question that sounds strange, and I thought I would check here.
Situation.
We are using an ETL tool I developed to move data around and to build a DW on 2005. In some cases we are using the bulk loader to load, and in some cases the tool itself.
One of the defaults of the tool is to truncate trailing blanks... and so fields that contain only blanks get truncated to a zero-length character string.
When reloading the data with the tool, it carries a null indicator next to the field value, so the zero-length character string is loaded as not null.
When loading with the bulk loader, the DBAs are telling me that the field is translated to a null. Note that they want some fields translated to null, so they are using the KEEPNULLS parameter.
On other databases (and it is built into the tool because of this) the bulk loaders usually allow the load statement to be specified at column level, including a 'null character string' that is translated to null wherever it is found. I put an example below.
I seem to recall that SQL Server 7 had some sort of bulk loader that allowed specification of columns at column level... for example offsets or whatever, and the specification of fields to be interpreted as nulls. (Though that was a long time ago.)
I have searched through the manual and I don't see an option there any more to specify a character that will be interpreted as a null by the bulk loader.
Is it possible in 2005 to specify a character such that when the bulk loader sees it, the field will be set to null? And not just set fields to null which are not present in the load file?
(Just by the way, we are going to make the truncate-trailing-blanks behavior optional, and it's easy... it's just that I thought this kind of null-if option was available in 2005, and I am keen to know if it is not there, either gone or never was...)
Thanks in Advance and Best Regards
Peter
Example of how Oracle does it, from:
http://www.csee.umbc.edu/help/oracle8/server.815/a67792/ch05.htm#5754
NULLIF Keyword
Use the NULLIF keyword after the datatype and optional delimiter specification, followed by a condition. The condition has the same format as that specified for a WHEN clause. The column's value is set to null if the condition is true. Otherwise, the value remains unchanged.
NULLIF field_condition
The NULLIF clause may refer to the column that contains it, as in the following example:
COLUMN1 POSITION(11:17) CHAR NULLIF (COLUMN1 = "unknown")
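Lacking a per-column NULLIF in BULK INSERT, a common workaround is a staging table plus T-SQL's NULLIF function on the way into the real table; a sketch with hypothetical names:
Code Snippet
-- bulk load into dbo.Staging first, then translate the sentinel
-- string to NULL during the copy into the permanent table
INSERT INTO dbo.RealTable (Column1)
SELECT NULLIF(Column1, 'unknown')
FROM dbo.Staging;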
I"m trying to use a BULK INSERT command to insert data into a table from a file. There is a UNIQUE Index that is being violated and the BULK INSERT fails.
I do not want to drop or disable the index, however, i also do not want to load 'duplicate' records so i keep the CHECK_CONSTRAINTS parameter.
Is there a way to have the duplicate records outputed to the ERRORFILE ?
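Not to the ERRORFILE, as far as I know; unique indexes are enforced during bulk load regardless of CHECK_CONSTRAINTS. If silently discarding duplicates is acceptable, the IGNORE_DUP_KEY index option is worth a look: duplicate rows are dropped with a warning instead of failing the batch (they are not written anywhere). A sketch with assumed names:
Code Snippet
-- duplicates are discarded with a warning rather than aborting the load
CREATE UNIQUE INDEX UX_MyTable_KeyCol
ON dbo.MyTable (KeyCol)
WITH (IGNORE_DUP_KEY = ON);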
I am trying to create a table with the following SQL script:
Code Snippet
create table Projects(
    ID smallint identity (0, 1) constraint PK_Projects primary key,
    Name nvarchar (255) constraint NN_Prj_Name not null,
    Creator nvarchar (255),
    CreateDate datetime
);
When I execute this script I get the following error message:
Error source: SQL Server Compact ADO.NET Data Provider
Error message: Named Constraint is not supported for this type of constraint. [ Constraint Name = NN_Prj_Name ]
I looked in SQL Server Books Online and saw the following:
CREATE TABLE (SQL Server Compact)
...
< column_constraint > ::=
    [ CONSTRAINT constraint_name ]
    { [ NULL | NOT NULL ]
      | [ PRIMARY KEY | UNIQUE ]
      | REFERENCES ref_table [ ( ref_column ) ]
          [ ON DELETE { CASCADE | NO ACTION } ]
          [ ON UPDATE { CASCADE | NO ACTION } ] }
As I understand it, according to the documentation named constraints should be supported; however, the error message says the opposite. I can rephrase the SQL script by removing the named constraint.
Code Snippet
create table Projects(
    ID smallint identity (0, 1) constraint PK_Projects primary key,
    Name nvarchar (255) not null,
    Creator nvarchar (255),
    CreateDate datetime
);

This script executes correctly; however, I want named constraints, and this does not satisfy me.
We are using SQL CE 3.5 on tablet PCs, which sync with our host SQL 2005 Server using Microsoft Synchronization Services. On the tablets, when inserting a record, we get the following error: A duplicate value cannot be inserted into a unique index. [ Table name = refRegTitle, Constraint name = PK_refRegTitle ] But the only PK on this table is RegTitleID.
The table structure is:
[RegTitleID] [int] IDENTITY(1,1) NOT NULL,
[RegTitleNumber] [int] NOT NULL,
[RegTitleDescription] [varchar](200) NOT NULL,
[FacilityTypeID] [int] NOT NULL,
[Active] [bit] NOT NULL,
The problem occurs when a Title Number is inserted and a record with that number already exists. There is no unique constraint on Title Number. Has anyone else experienced this?
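One thing worth ruling out is a stray unique index on the device-side copy of the table. SQL CE exposes index metadata through INFORMATION_SCHEMA; a sketch to run against the .sdf (column names per my recollection of the Compact schema, so verify before relying on them):
Code Snippet
-- list the indexes on the Compact table, looking for an
-- unexpected unique index on RegTitleNumber
SELECT INDEX_NAME, COLUMN_NAME, [UNIQUE], PRIMARY_KEY
FROM INFORMATION_SCHEMA.INDEXES
WHERE TABLE_NAME = 'refRegTitle';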
I have to update a field within a table of 60 records or so. Each record has a different field value; it's type varchar. I was given an Excel file with the field values and was thinking of a bulk update like BULK INSERT, but I don't recall that it's possible that way.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
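For 60 rows the staging-table route is probably the path of least resistance; a sketch, with every name hypothetical:
Code Snippet
-- load the spreadsheet into a staging table (Import/Export wizard,
-- or save as CSV and BULK INSERT), then update with a join
UPDATE t
SET t.SomeField = s.SomeField
FROM dbo.TargetTable AS t
JOIN dbo.StagingTable AS s
  ON s.KeyCol = t.KeyCol;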
Please help if you can! I have set up a job that runs every day on an hourly basis. Every morning I find that it's been disabled. The funny thing is that the schedule is disabled, but the job is not (you see 'Enabled' in the jobs list in EM, but when you view the Schedules tab, it's disabled). Also, it runs several times before becoming disabled.
I have a server that was being used for log shipping and had replication set up at some point as well. One of the databases got out of sync in the log shipping process, so I removed log shipping and was going to reinitialize the database and set up log shipping again. The database is in read-only mode, and when I try to take it out of read-only I get the following message:
Error 5063: Database 'XXXXXXXX' is in warm standby. A warm standby database is read-only. ALTER DATABASE statement failed. sp_dboption command failed.
I have tried to disable replication on the server but get the following error message:
SQL Server Enterprise Manager could not disable 'SRVXXXX' as a publisher. Error 3906: Could not run BEGIN TRANSACTION in database 'XXXXXXX' because the database is read only.
So my problem is that I can't take the database out of read-only mode because of replication and I can't disable replication because the database is in read-only mode.
Has anyone come across this before and how should I resolve it? I tried dropping the database as well and that didn't work either.
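One sequence that may break the deadlock (a sketch only; test against a copy first): recover the database out of warm standby, which lifts the read-only state, then strip the replication metadata. The database name is the placeholder from the error message:
Code Snippet
-- bring the warm-standby database fully online
RESTORE DATABASE XXXXXXXX WITH RECOVERY;
-- then remove replication metadata from the now-writable database
EXEC sp_removedbreplication 'XXXXXXXX';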
Is there a way to disable a trigger when performing a transaction besides dropping and recreating the trigger? I am trying to perform an insert on a table, and this keeps firing a trigger that I want to disable.
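Assuming SQL Server, the table-scoped DISABLE TRIGGER syntax avoids the drop/recreate cycle; a sketch with hypothetical names:
Code Snippet
-- disable just the one trigger, run the insert, re-enable it
ALTER TABLE dbo.MyTable DISABLE TRIGGER trg_MyTrigger;
INSERT INTO dbo.MyTable (Col1) VALUES ('value');
ALTER TABLE dbo.MyTable ENABLE TRIGGER trg_MyTrigger;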
I have accidentally registered an existing database as a distribution database, which made it a system database. The data itself is safe and sound, but I want to undo the whole thing.
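A possible unwind, hedged because half-configured distributors can be stubborn (the database name is a placeholder):
Code Snippet
-- drop the distribution database registration, then the distributor itself
EXEC sp_dropdistributiondb @database = N'MyFormerUserDb';
EXEC sp_dropdistributor @no_checks = 1;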
Hi, I have a table with a primary key defined on col1 and col2. Now I want to have col3 also included in the primary key. When I alter the table it gives me an error for duplicate rows. There is an option for WITH NOCHECK, but it only works with CHECK or FOREIGN KEY constraints. Is there any option in SQL Server, like Oracle's NOVALIDATE, that doesn't validate the existing data but forces validation for new records?
Thanx
Farid
I want to disable foreign key constraints en masse. Is there a way to do this?
I know that I can go into each table, navigate to the Relationship tab of Properties, and uncheck the "Enforce relationship for INSERTs and UPDATEs" box, but I'd much prefer to automate this process with a query, since there are 160+ tables and probably 200+ relationships to disable.
I figure that sysconstraints may be the ticket, but I will keep experimenting until I get the right solution. In the meantime, any insight to steer me in the right direction is appreciated.
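Two sketches that avoid the UI entirely: the first uses the undocumented but widely used sp_MSforeachtable helper, the second generates the statements from sysobjects (type 'F' marks FOREIGN KEY constraints) so they can be reviewed before running:
Code Snippet
-- option 1: one-liner via the undocumented helper
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- option 2: generate reviewable statements
SELECT 'ALTER TABLE ' + OBJECT_NAME(parent_obj)
     + ' NOCHECK CONSTRAINT ' + name
FROM sysobjects
WHERE type = 'F';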
Hi, I'm using DMO (SQLOLE65.dll) to programmatically replicate selected publications. The Publication object supports a property called Enabled, which can be set to FALSE. I'm setting the Enabled property to TRUE for those publications that need to be replicated, and making the others FALSE. Still, all publications get replicated. [I call DoAlter to commit the changes I made.] Any solution? Please mail me ASAP at venkateswaranb@synectics.soft.net
In the process of doing some routine monitoring/clean-up we've discovered that several (many?) users are apparently set to access our SQL Server 2000 database instances via the Named Pipes protocol. In readings and recommendations we've decided that our WAN would be best served if we use the less "chatty" TCP/IP.
As such, we've also decided to try to enforce this decision to use TCP/IP exclusively using the domain login script used by all of our end-users.
Question: does anyone know what registry entries are created/used to indicate that TCP/IP is enabled and is the default protocol for SQL Server 2000? Our environment is: XP Pro SP2 and SQL Server 2000 (typically SP3).
TIA
Glenn - newbie DBA
Here is my situation: I need to disable a task at runtime. I have a script task that configures a Boolean variable at runtime, setting its value to either true or false based on a condition. I have also set the 'Disable' property of the component to get its value from the Boolean variable. The problem is that the component gets the default value given when the variable was created, instead of the configured value.
I'm currently using SQL Server 2000 and SSRS 2000 with the latest Service Packs.
I need to disable Excel Exports for a single report.
I've found a way to disable Exporting Formats, but it disables them for ALL REPORTS on the server. This involves changing the rsreportserver.config file: http://blogs.digineer.com/blogs/jasons/archive/2006/05/10/93.aspx
Another site mentions a way to disable Exporting Formats for a single Report. http://mikemason.ca/2007/04/30/
Code Snippet
using System.Reflection;
using Microsoft.Reporting.WebForms;
using Microsoft.SqlServer.ReportingServices2005.Execution;

namespace MyProject
{
    public class ServerReportDecorator
    {
        private readonly ServerReport serverReport;

        public ServerReportDecorator(ReportViewer reportViewer)
        {
            this.serverReport = reportViewer.ServerReport;
        }

        public ServerReportDecorator(ServerReport serverReport)
        {
            this.serverReport = serverReport;
        }
I'm looking for further clarification of this process. 1. Where can I download the Microsoft.SqlServer.ReportingServices2000 dll? 2. Does anyone have an example project for this process, showing how and when to actually use the code quoted above?
I have a table containing 8 million records. I need to replace 2 million of these records with a scaled-down query that goes something like:
SELECT 1, ShareholderID, Assets1 FROM MyTable (yields appx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields appx. 200,000 records)
. . .
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields appx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way: SELECT the 6 million records that don't need to be deleted into a #TempTable; use the statements above to SELECT into the same #TempTable; DROP and recreate the original table; SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?
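A sketch of a swap-based variant (the Slice column and the column list are assumptions): build the replacement off to the side, then cut over with sp_rename, which is a metadata-only operation:
Code Snippet
-- keep the ~6 million untouched rows
SELECT Slice, ShareholderID, Assets1
INTO dbo.MyTable_New
FROM dbo.MyTable
WHERE Slice NOT BETWEEN 1 AND 10;

-- add the scaled-down slices (repeat for 2 through 10)
INSERT INTO dbo.MyTable_New (Slice, ShareholderID, Assets1)
SELECT 1, ShareholderID, Assets1
FROM dbo.MyTable;

-- swap the tables
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';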