Data Access :: Why Bcp Out Is Fast
Jun 10, 2015
Why is bcp out (exporting data from a SQL table to a text file using the bcp utility) faster?
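For context, bcp out is fast largely because it streams rows through the bulk-copy API rather than fetching and formatting them one at a time, and native mode skips character conversion entirely. A minimal sketch, with a hypothetical database, table, and server name:

bcp MyDB.dbo.MyTable out C:\export\MyTable.dat -n -T -S MyServer

Here -n requests native format, -T uses a trusted connection, and -S names the server; switch -n to -c if a readable character-format text file is the goal.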
I have a dashboard page that I would like to load fast, but I would also like the user to be able to run the report on demand. The report has start date and end date parameters, which default to the first and last day of last month. If I use a cache plan that loads at 6 AM, the report runs with the default parameters set in the report, and the user has to wait until it completes before they can change the parameters.
If I uncheck "use default" parameters in the report, I lose the ability to set a cache refresh plan with the default parameters because the option is greyed out. So, short of creating a second copy of the report, is there any way to let a user open a report, change the parameters and run it, while also scheduling some sort of cached report with the default parameters that I can put on a web part page so it loads instantly?
Hello everybody
We need to move table T1 from database A to database B on the same server.
Table T1 is 15 GB and 40,000,000 rows.
Database B was just created and will act as a warehouse.
Could it be done simply by:
1. creating table T1 in db B, then
2. setting db B to simple recovery,
3.
INSERT INTO B.dbo.T1
SELECT * FROM A.dbo.T1
4. creating all the indexes on table T1 in db B?
Free disk space is 35 GB.
Any idea how to optimize the import?
Thank you
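One common variation on the steps above (a sketch, using the names from the post, and assuming dbo.T1 does not yet exist in B) is to let SELECT INTO create and populate the table in one minimally logged pass while B is in simple recovery, and only then build the indexes:

-- Sketch: minimally logged copy under SIMPLE recovery
ALTER DATABASE B SET RECOVERY SIMPLE;
SELECT * INTO B.dbo.T1 FROM A.dbo.T1;  -- creates the table and bulk-loads it in one statement
-- then step 4: create all the indexes on B.dbo.T1

Creating the indexes after the load, as step 4 already proposes, is usually much faster than loading into a table that carries its indexes during the insert.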
I am searching for a way to fast-load relational data. I know how to load data fast, but how can I store related data fast?
For example :
Table1 ( table1Id int identity, name varchar(255) )
Table2 ( table2Id int identity, table1Id int, name varchar(255) )
When I insert 50 records into Table1, I can't get the 50 identity values back to insert the related data into Table2.
I think one of the solutions could be returning a selection of Table1 joined with syslockinfo, but I have no idea how to do it.
Does anyone have an idea?
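On SQL Server 2005 and later, the OUTPUT clause solves exactly this: it hands back every identity value a set-based insert generates. A sketch, assuming a hypothetical staging table #incoming(name, childName) and that name is unique enough to join back on:

-- capture the 50 new identity values in one pass
DECLARE @new TABLE (table1Id int, name varchar(255));

INSERT INTO Table1 (name)
OUTPUT INSERTED.table1Id, INSERTED.name INTO @new
SELECT name FROM #incoming;

-- use the captured ids for the child rows
INSERT INTO Table2 (table1Id, name)
SELECT n.table1Id, i.childName
FROM @new AS n
JOIN #incoming AS i ON i.name = n.name;  -- join on a natural key; assumes names are unique

On SQL Server 2000 this clause is not available, which is why workarounds like the syslockinfo join were floated.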
Hi everyone,
How can I load or copy, say, millions of rows into a table in the database faster?
Thanks,
Mejo George
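For millions of rows, the bulk-load paths (bcp in, BULK INSERT, or a fast-load data flow) are the usual answer, since they can qualify for minimal logging instead of paying per-row overhead. A sketch with an assumed file path, delimiters, and table name:

BULK INSERT dbo.BigTable
FROM 'C:\load\bigtable.dat'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK, BATCHSIZE = 100000);

TABLOCK helps the load qualify for minimal logging on a heap, and BATCHSIZE commits in chunks so a failure doesn't roll back the whole load.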
I'm currently working with a 10 million plus row database, with the data residing on a Unix box running Cache 5.0. The problem is that it can take five days to pull one table from Cache into SQL 2000 using the ODBC connection provided by Cache in a SQL 2000 DTS package. I think the real problem is converting the data from the post-relational format (Cache) to a relational format (SQL 2000). Does anyone have any ideas or suggestions on how to speed up this transfer of data? I'm very new to Cache and any help would be greatly appreciated. Thanks, -p
Hi,
For this scenario, what is the best method of exporting data to SQL 2005?
I want to export data from a desktop app across the internet to SQL Server, which I can do on a row-by-row basis, but this is very slow, and if the connection goes down halfway through I'm pretty much buggered.
What is the best, most reliable and fastest way to copy data across the internet (several thousand rows)? I have read about BULK INSERT etc., but how would I get around an upload that crashes halfway? Is there a way of uploading so that the data is only inserted into the database once the whole upload has gone through?
Would appreciate any guidance.
Richard
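One pattern that addresses the half-finished-upload worry is to land the rows in a staging table first (by any method, resumable if it fails), and only move them into the live table in a single transaction once everything has arrived. A sketch with hypothetical table and column names:

-- nothing reaches dbo.Orders unless the whole batch is complete
BEGIN TRANSACTION;
INSERT INTO dbo.Orders (OrderId, CustomerId, Amount)
SELECT OrderId, CustomerId, Amount
FROM dbo.Orders_Staging;
DELETE FROM dbo.Orders_Staging;
COMMIT TRANSACTION;

If the connection dies mid-upload, only the staging table is affected, and the client can resume or restart the staging load without touching live data.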
Does anyone know how to upload (bulk) data from a client (written in Excel VBA) to a remote SQL 2000 database? Of course I tried "INSERT INTO" and rst.AddNew, but I noticed this is much, much slower than downloading from the same remote database.
Thanks.
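Rather than pushing rows one at a time from VBA, one server-side alternative is to have SQL Server pull the worksheet in a single set-based statement via OPENROWSET. A sketch (the workbook path, sheet, and column names are assumptions, and ad hoc distributed queries must be permitted on the server):

INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\upload\book.xls;HDR=YES',
                'SELECT Col1, Col2 FROM [Sheet1$]');

The workbook does need to be readable from the server, so this fits workflows where the file can be copied to a share first.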
I have the DB structure below in MS SQL for a small application that follows a relational approach. Data retrieval (for Hostels) will need several joins; maybe a key-value approach, where data retrieval would be fast, would be better.
Hostels
------------
HostelId,
Name,
Address,
CategoryId,
SubCategoryId,
FoodCategoryId,
LandLordId
Data:
1 H1 Address1 1 1 2 20
2 H2 Address2 1 2 2 21
3 H3 Address3 2 2 1 17
Category
----------
CategoryId,
CategoryName
[code]...
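For reference, the relational retrieval being weighed here is a straightforward set of joins. A sketch using the two tables shown (the elided SubCategory, FoodCategory, and LandLord tables would join the same way):

SELECT h.HostelId, h.Name, h.Address, c.CategoryName
FROM Hostels AS h
JOIN Category AS c ON c.CategoryId = h.CategoryId;

With indexes on the foreign key columns, a handful of joins like this is normally fast; a key-value layout mainly pays off when the attribute set is open-ended rather than fixed as it is here.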
I’m looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel and, when the BULK INSERT for table dbo.X_n completes, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide, under Partition SWITCH, I read the instructions to say: copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the default filegroup of the target (dbo.X_n) from the default filegroup. Then it says I need to match indexes, and it lists the filegroup as something we need to match with the main table.
As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have separate tables (in the same filegroup) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is completed, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn’t actually move, just the metadata for it.
When I read the instructions linked above, I hear: “Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
Where am I disconnected?
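For reference, the switch itself is a single metadata operation, and its preconditions are what the guide is describing. A sketch using the names from the post (the partition number is assumed):

-- dbo.X_1 must match dbo.bigdaddy's schema and indexes, and must sit on the
-- same filegroup as the partition it is switching into
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 1;

One reading of the guide's step 1 (hedged, not authoritative): the instruction about changing the default filegroup concerns where CREATE TABLE will put the staging table, the point being to place it deliberately so it does land on the same filegroup as the target partition, which is the requirement the index-matching step then repeats.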
I'm trying to re-write my database to decouple the interface (MS Access) from the SQL back end. As a result, I'm going to write a number of stored procedures to replace the MS Access code. My first attempt worked on a small sample; however, trying to move this on to a real table hasn't worked (I've amended the SP and code to try and get it to work on 2 fields, rather than the full 20 plus). It works in the SQL Management console (supply a Client ID, it returns all the client details), but does not return anything (recordset closed) when accessed via VBA code. The stored procedure is:
USE [VMSProd]
GO
/****** Object: StoredProcedure [Clients].[vms_Get_Specified_Client] Script Date: 22/09/2015 16:29:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[code]....
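Since the procedure body is elided above, one frequent culprit worth checking (a guess, not a diagnosis): without SET NOCOUNT ON, the "n rows affected" messages come back to ADO as extra, closed recordsets ahead of the real one. A hypothetical skeleton showing the fix, with the parameter, column, and table names all assumed:

ALTER PROCEDURE [Clients].[vms_Get_Specified_Client]
    @ClientID int
AS
BEGIN
    SET NOCOUNT ON;  -- keeps rowcount messages from surfacing as closed recordsets in ADO
    SELECT ClientID, Surname    -- the real column list goes here
    FROM Clients.ClientDetails  -- table name assumed
    WHERE ClientID = @ClientID;
END

On the VBA side, rs.NextRecordset can also step past any stray closed recordsets if the procedure can't be changed.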
I have a client who has SSMS installed on her laptop. She is able to connect to the SQL server via SSMS in the office and query data on the server.
She is often out of the office and doesn't have internet access. She asks if the data tables can be "backed up" or saved on her laptop, so she can look at them without worrying about connecting to the server. I am not sure if this can be achieved, as SSMS is built for accessing a server, not a desktop. I've never had this need myself. If I really needed it, I would go to Microsoft Access and create an ODBC connection to the data tables. But this client thinks that Microsoft Access is beneath her.
HowTo: Import data to MS SQL 2008 from a password-protected Access DB?
I have recently upgraded to SQL 2014 on Win 2012. The Access front-end program works fine.
But previously created Excel reports with built-in MS Queries now fail with the above error for users on Office 2013. The queries still work for users still on Office 2007.
I also cannot create any new queries, and I get the same error message. If I log on as myself on the domain to another PC with Office 2007 installed, it works fine, so I don't think it has anything to do with AD groups or permissions.
We need to insert data/rows from a SQL Server 2014 database into an MS Access database. The problem is that there are so many columns (100+) in the table, and so many insert transactions of this kind (from different tables), that it is not very easy to write VB.NET code that lists all the column names.
Both the Access and SQL Server tables have the same number of columns and equivalent data types, so inserting is not really the problem. It's just: is there a way to write an INSERT statement in T-SQL that does not name all the columns?
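T-SQL does allow INSERT ... SELECT with no column list, provided the SELECT supplies every column of the target in ordinal order, which matches the situation described. A sketch, assuming a hypothetical linked server named ACCESSDB pointing at the Access file:

-- legal without naming columns because the column counts and types line up
INSERT INTO ACCESSDB...TargetTable
SELECT * FROM dbo.SourceTable;

The four-part name with two empty parts (server...table) is the usual form for file-based OLE DB linked servers such as Access.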
Hi guys,
I've been developing desktop client-server and web apps and have used Access and SQL Server Standard most of the time.
I'm looking into using SQL CE, and had a few questions that I can't seem to get a clear picture on:
- The documentation for CE says that it supports 256 simultaneous connections and offers isolation levels, transactions, locking, etc., with a 4 GB DB. But most people say that CE is strictly a single-user DB and should not be used as a DB server.
Could CE be extended for use as a multi-user DB server by creating a custom server, such as a .NET Remoting server hosted in a Windows service (or any other custom host), on a machine where the CE DB would run in-process with this server and would then be accessed by multiple users from multiple machines?
Clients PCs -> Server PC hosting Remoting Service -> ADO.NET -> SQL CE
- And furthermore, can we use Enterprise Services (serviced components) to connect to SQL CE and extend this model further to offer a pure, high-quality DB server?
Clients PCs -> Server PC hosting Remoting Service -> Enterprise Services -> ADO.NET -> SQL CE
Seems quite doable to me, but I may be wrong... please let me know either way.
Thanks,
CP
When I try to start our hospital software, which is based on SQL Server 2000, it shows the following error: "Search condition is not valid, (DBNETLIB) ConnectionOpen (Connect()). SQL Server does not exist or access denied." when fetching data. I run our software on Windows 8.1, while it runs smoothly on the previous versions, Windows XP and 7.
View 2 Replies View Relatedmy system has 2 db's - sql server 2000 & db2 @ separate locations. i have a select query which needs 2 pick up consolidated data from both the tables. also the schema on the db2 has minor changes when compared with the schema on sql server 2000.
while searching on microsoft i came across the technique of creating a linked server. would this be possible 2 implement in my scenario. also would in this case, be advised that i create another view in the db2 server which has changed the db2 schema to the sql server schema format??
please hurry..
regards,
sameer
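A linked server can work for this. A sketch (the provider name, data source, and all object names are assumptions):

EXEC sp_addlinkedserver
    @server = N'DB2SRV',
    @srvproduct = N'DB2',
    @provider = N'DB2OLEDB',  -- Microsoft OLE DB Provider for DB2, if installed
    @datasrc = N'db2-host';

SELECT Col1, Col2 FROM dbo.LocalTable
UNION ALL
SELECT ColA AS Col1, ColB AS Col2  -- aliasing papers over the minor schema differences
FROM DB2SRV.MYCATALOG.MYSCHEMA.MYTABLE;

A view on the DB2 side that presents the SQL Server column layout, as suggested, would make the UNION even simpler.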
One or more files listed in the statement could not be found or could not be initialized. (Microsoft SQL Server, Error: 5009)
I accidentally created a log file on my E: drive, but every time I try to delete the log file it keeps returning the same error.
Can someone please help me delete the log file?
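If the extra log file is still attached to the database, the usual sequence is to empty it and remove it through ALTER DATABASE rather than deleting the physical file. A sketch with assumed database and logical file names:

USE MyDB;
DBCC SHRINKFILE (ExtraLog);                -- empty the unwanted log file
ALTER DATABASE MyDB REMOVE FILE ExtraLog;  -- then drop it from the database

Deleting the .ldf from disk while the database still references it is what tends to produce error 5009, so the file has to come out of the database's file list first.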
Hi, I have over a million records in my DB. What is the best way to get results fast when I need to get the details of an employee named, say, "robert"? If I do it normally it will take long; should I use an index, or is there another good way? Thanks in advance, cheers.
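An index on the searched column is indeed the standard answer. A sketch with assumed table and column names:

CREATE NONCLUSTERED INDEX IX_Employees_Name
    ON dbo.Employees (EmployeeName);

SELECT *
FROM dbo.Employees
WHERE EmployeeName = 'robert';  -- an equality (or LIKE 'robert%') search can now seek instead of scan

A leading-wildcard search (LIKE '%robert%') would still scan the table, index or not.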
Hello everybody.
I have a 40 GB DB running mostly transaction processing.
I set up:
1. full backups 2 times a day (takes 30-40 min)
2. log backups every 15 min
3. custom log shipping
4. We don't want to use clustering.
Once in a while, because of network or other problems, log shipping fails,
so I have to restart log shipping all over, starting from restoring the last full backup of my DB in standby mode. It takes 2-3 hrs just to do this restore!
1. So I am asking for advice: is there any way I can bring down the time for the restore?
2. Should differential backups be taken?
3. We will not use clustering.
Alex
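On question 2: differential backups are exactly what shortens this re-initialization, since the standby can be rebuilt from the last full plus the latest differential instead of replaying every log since the full. A sketch with assumed paths:

RESTORE DATABASE MyDB
    FROM DISK = 'E:\backup\MyDB_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE DATABASE MyDB
    FROM DISK = 'E:\backup\MyDB_diff.bak'
    WITH STANDBY = 'E:\backup\MyDB_undo.dat';  -- back in standby, ready to receive log backups again

Log shipping then resumes from the first log backup taken after the differential.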
Hello all! For MS SQL 2000, I have a table with > 100,000 rows that I must clean:
DELETE FROM myTable WHERE Name LIKE 'aser%' AND info IS NULL
DELETE FROM myTable WHERE Name LIKE 'tuyi%' AND Info = 'ok'
DELETE FROM myTable WHERE Name LIKE 'hop%' AND info LIKE 'retro%'
... about 20 DELETE commands in all. What is the best way to do it? Thank you.
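One option worth testing (a sketch built only from the statements quoted above) is to combine the predicates so the table is passed once instead of ~20 times:

DELETE FROM myTable
WHERE (Name LIKE 'aser%' AND info IS NULL)
   OR (Name LIKE 'tuyi%' AND Info = 'ok')
   OR (Name LIKE 'hop%' AND info LIKE 'retro%');
   -- ...remaining conditions OR'd on in the same way

Whether this beats the separate DELETEs depends on the indexes on Name and info, so it is worth comparing both forms against realistic data.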
Hi everyone, I'm deeply in need of help with a very easy query, and I have a few questions to ask. I use MSN: boy22202@hotmail.com. Please, I want to contact anyone who uses SQL Server 2005 and can help me with it... thank you.
View 4 Replies View RelatedI need to insert data to a temp table in SQL ,
I have
CREATE TABLE TMP_X (
doc_name varchar(200)
)
--select * from TMP_X
INSERT into TMP_X
values
(
'...,
but it's saying there isn't a match. I know why: it's trying to insert all the data as one row, but I need the values as separate rows, since I want only one column.
Is there another type of INSERT that can do this?
If I have a table with one column and I want to insert a few hundred rows of names, I can't write an INSERT statement per value, as that does one row at a time. How can I achieve this?
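Two standard ways to get many literal rows into a one-column table: row constructors (SQL Server 2008 and later) or UNION ALL on older versions. A sketch against the TMP_X table above, with made-up values:

-- SQL Server 2008+: one INSERT, many rows
INSERT INTO TMP_X (doc_name)
VALUES ('doc1'), ('doc2'), ('doc3');

-- older versions: UNION ALL builds the row set
INSERT INTO TMP_X (doc_name)
SELECT 'doc1' UNION ALL
SELECT 'doc2' UNION ALL
SELECT 'doc3';

Either form inserts all the names as separate rows in a single statement.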
I have stupidly deleted my default DB. That causes Enterprise Manager to be unable to work with my DBs. The default DB I deleted had no function other than being the default DB; it was outdated, and I had other DBs that contained all my important work. They are still running, and I can view a DB-driven site hosted at localhost, even though the default DB no longer exists. I am even able to upload new content and add new users, so all my other DBs are fine. I can even see the SQL Server icon in the bottom right corner of my desktop, and it shows the server running.
Now I need to add tables and rework some of my existing tables and stored procedures, but I am not able to do that with Enterprise Manager, due to the lack of a default database.
How do I correct this problem? I have gotten one tip: EXEC sp_defaultdb 'User', 'DB', but I am not sure what to do with it. I tried to run it from the command line, putting in my username and the DB I would set as default, but nothing happened.
So I need more details; step-by-step guidance will work, as I don't know a whole lot about Enterprise Manager and SQL.
Btw, this is my error in Enterprise Manager:
A connection could not be established to MyComputerVSDOTNET2003
Reason: Cannot open default database. Login failed.
Please verify SQL Server is running and check your SQL Server registration properties and try again.
Please tell me there is a way to fix this problem.
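For reference, sp_defaultdb is a stored procedure, so it runs through a query tool, not as a bare OS command; the trick when the default database is gone is to force the connection into master first. A sketch using osql (the login name is assumed, and the server name follows the error message above with the usual machine\instance separator assumed):

osql -E -S MyComputer\VSDOTNET2003 -d master -Q "EXEC sp_defaultdb 'MyLogin', 'master'"

-E uses Windows authentication, -d master overrides the broken default so the connection can open at all, and the EXEC then points the login at an existing database permanently. After that, Enterprise Manager should connect again.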
Hello everybody,
Please advise: what is the fastest standard method of user-interface access to a SQL database? I am looking for fast display of one master record plus related dependent records, plus fast scrolling through master records with the dependent records displayed as fast as possible. Perhaps a standard problem with a standard solution? At the current state of matters, I am still much slower than with my old Access 97 database.
thanks for any advice,
Otakar Kverka
Prague
I noticed this morning that my tempdb grows very fast. I have 26 GB on my hard drive, and all the space got occupied by tempdb; finally the query failed because there was 0 space left on the drive and no room for tempdb to grow.
The SELECT query was supposed to bring back about 40,000 rows.
I ran this same query on a different server, where tempdb doesn't grow by even 1 MB.
I checked the tempdb options, and "trunc. log on chkpt." is true.
Why is this problem happening?
I have just dbo permission to access all the databases.
Do you have any advice regarding this?
Thanks,
Ravi
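A couple of era-appropriate checks (a sketch; run on the affected server) can show whether the growth is log or data, which narrows down why the same query behaves so differently elsewhere:

DBCC SQLPERF (LOGSPACE);  -- log size and percent used for every database, including tempdb

USE tempdb;
EXEC sp_spaceused;        -- data space actually allocated and used in tempdb

A 40,000-row result that fills 26 GB of tempdb usually points at a huge intermediate step (a sort, hash, or an unintended cross join) rather than the result set itself, so comparing the query plans between the two servers is also worth doing.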
Hello, everyone:
My database backup files are 3-5 GB. Restoring always takes over 20 minutes. Is there a faster way to restore a big database?
Thanks
ZYT
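One standard lever (a sketch with assumed names and paths) is striping the backup across several files, ideally on separate drives, so both backup and restore can read and write in parallel:

BACKUP DATABASE MyDB
    TO DISK = 'D:\bak\MyDB_1.bak', DISK = 'E:\bak\MyDB_2.bak';

RESTORE DATABASE MyDB
    FROM DISK = 'D:\bak\MyDB_1.bak', DISK = 'E:\bak\MyDB_2.bak';

Restore time is mostly I/O-bound, so the gain tracks how independent the underlying disks really are.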
(Win2003, SQL Server 2000 SP4)
I have a database of about 5 GB. Some queries were taking more than 1 minute to complete (all of them are stored procedures). Because of that lack of performance, I ran DBCC DBREINDEX on each table, executed the sp_updatestats system stored procedure, and finally executed the sp_recompile system stored procedure for each SP in my database.
After all these tasks, queries completed in a few seconds instead of minutes. Strangely enough, some hours later (about 6 hrs), after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.
I am assuming that the indexes are degrading so fast that they require another reindex, but I am not sure.
Any thoughts? How can I prevent this behaviour?
Thanks a lot in advance.
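If the reindex genuinely fixes it each time, a scheduled nightly job is the usual stopgap while the underlying churn (or a too-low fill factor) is investigated. A sketch using the same commands the post mentions (sp_MSforeachtable is undocumented but widely used):

EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')';  -- rebuild every table's indexes
EXEC sp_updatestats;                              -- refresh statistics afterwards

If performance decays within hours, it may be statistics going stale rather than physical fragmentation, in which case auto-update statistics, or running sp_updatestats alone more often, is a lighter fix.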
I have a weird situation I had not expected.....
I insert a record to a table and "later" I update it.
I have two fields to capture time information: Created and LastModified.
My update is very simple: UPDATE ... SET ..., [LastModifiedDate] = GetDate() WHERE id = @pId.
Now my problem is that I am seeing the Created and LastModified times as the same (in the format 2007-09-05 12:38:42.383)!?
The record has definitely been updated (other fields are populated).
Can anybody enlighten me?
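Two things worth checking (a sketch with an assumed table name): datetime only resolves to about 3 ms, so an update that runs very soon after the insert can legitimately produce an identical value, and a trigger or stray SET that re-stamps Created during the update would do the same. Comparing the columns directly shows which case this is:

SELECT id, Created, LastModifiedDate,
       DATEDIFF(ms, Created, LastModifiedDate) AS diff_ms
FROM dbo.MyTable  -- table name assumed
WHERE id = @pId;

A diff_ms of 0 on a record updated long after it was created suggests Created is being rewritten, not that GetDate() misfired.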