SQL Server 2014 :: Insert Data From PDF Files
Mar 23, 2014
We are facing a problem loading data from .pdf files received from a vendor. The .pdf files have data in tabular format, and we would like to insert those fields into a SQL table. We do not want to insert the physical location of the file; we need to insert the data within the file. How can we read a PDF file?
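SQL Server has no native PDF parser, so the usual pattern is to flatten the PDF tables to CSV with an external extraction tool and then bulk-load the result. A minimal sketch, assuming the vendor file has already been converted (the table name and path are hypothetical):

-- Assumes an external tool has already extracted the PDF tables to CSV;
-- SQL Server itself cannot read PDF content.
BULK INSERT dbo.VendorData                 -- hypothetical target table
FROM 'D:\imports\vendor.csv'               -- hypothetical path to the extracted file
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);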
View 6 Replies
Jan 9, 2015
For a new server, I proposed that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid state SAN technology this would decrease performance, and that it does not work the same way as it did with RAID 5 and similar configurations. I thought that if things were carried out correctly, a SAN administrator would know how to configure for optimal performance.
View 2 Replies
Feb 24, 2015
My task is to convert JPEGs to binary and then insert them into a table called "images". I need to convert and insert all JPEG files in a directory. I can accomplish the task if the files are numbered: the query below works by retrieving one file at a time based on the value of @i. However, I also have directories where the files are not numbered but have ordinary text names like "Red_Sofa.jpg". I need to iterate through these directories as well and convert/insert the JPEGs. I'm running SSMS 2014 Express on 4.0 and Windows 7.
DROP TABLE images
CREATE TABLE images
(
image_name varchar(500) null
,image_data varbinary(max) null
)
[Code] ....
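For the unnumbered files, one approach that stays inside T-SQL is to list the directory with xp_dirtree and loop over the names with dynamic SQL, since OPENROWSET(BULK ...) needs a literal path. A sketch, with the folder path assumed:

DECLARE @dir nvarchar(260) = N'C:\images\';                  -- assumed folder
CREATE TABLE #files (subdirectory nvarchar(260), depth int, [file] int);
INSERT INTO #files EXEC master.sys.xp_dirtree @dir, 1, 1;    -- depth 1, include files

DECLARE @name nvarchar(260), @sql nvarchar(max);
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT subdirectory FROM #files WHERE [file] = 1 AND subdirectory LIKE N'%.jpg';
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- OPENROWSET(BULK ..., SINGLE_BLOB) requires a literal path, hence the dynamic SQL
    SET @sql = N'INSERT INTO images (image_name, image_data)
                 SELECT @n, BulkColumn
                 FROM OPENROWSET(BULK ''' + @dir + @name + N''', SINGLE_BLOB) AS b;';
    EXEC sp_executesql @sql, N'@n nvarchar(260)', @n = @name;
    FETCH NEXT FROM c INTO @name;
END
CLOSE c; DEALLOCATE c;
DROP TABLE #files;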
View 2 Replies
Apr 21, 2015
USE <database>
select * from sys.database_files
and
select * from sys.master_files where database_id= <db id>
give me different sizes for the memory-optimized file in <database>.
Microsoft SQL Server 2014 - 12.0.2456.0 (X64)
View 1 Replies
Jan 13, 2014
I want to use service SIDs for my SQL service accounts but also want to have the data files on a NetApp filer CIFS share. The 2014 installer prevents installation if CIFS and service SIDs are used. I tried installing with a domain account on CIFS, and then swapping back to service SIDs afterwards, but couldn't find a way to do it.
I granted the AD computer account Full Control on the CIFS share, so it should work, but I just can't work it out at the moment.
View 0 Replies
Feb 2, 2015
I've been trying to get a definitive answer to this question, but so far I have conflicting and patchy answers from other sources. I have an index that, let's say, requires 10 GB of data space to rebuild. This index resides on a filegroup that spans two files on two separate drives (i.e., an .mdf and an .ndf).
When I rebuild this index, how will each of these data files grow as the rebuild proceeds to completion? For the time being, let's remove the caveat of any other activity hitting the example index/database in question. My tests seem to show that only the .mdf grows (or the file with the lowest id in that filegroup), provided there is enough space available in that particular file to complete the operation. The secondary .ndf data file doesn't grow at all if the .mdf has enough space.
Is this expected behavior? I.e., is the index rebuilt in a contiguous manner relative to the files contained in the filegroup: file id 1 grows until its limit is reached, then the next file id grows, and so on?
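One way to verify this is to watch per-file size and used space from another session while the rebuild runs; a quick sketch:

SELECT name,
       size * 8 / 1024 AS size_mb,                              -- allocated size
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb    -- pages actually in use
FROM sys.database_files
WHERE type_desc = 'ROWS';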
View 0 Replies
Aug 11, 2015
Example data in the CSV is as follows:
"XXX","0001",-990039739 ,0 ,0 ,0 ,0 ,0 ,0
"ABC"," ",-3422054702 ,0 ,481385 ,0 ,0 ,0 ,0
"JJZ","0001",0 ,0 ,0 ,0 ,0 ,0 ,0Here's my format:
12.0
10
1 SQLCHAR 0 0 "\"" 0 "" ""
2 SQLCHAR 0 5 "\",\"" 1 OKCCY SQL_Latin1_General_CP1_CI_AS
[Code] ....
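For reference, a complete 10-field version matching the sample rows above might look like this (the column names after OKCCY are placeholders; the zero-width first field swallows the opening quote):

12.0
10
1  SQLCHAR 0 0  "\""    0 FIRST_QUOTE ""
2  SQLCHAR 0 5  "\",\"" 1 OKCCY  SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR 0 5  "\","   2 OKUNIT SQL_Latin1_General_CP1_CI_AS
4  SQLCHAR 0 24 ","     3 AMT1   ""
5  SQLCHAR 0 24 ","     4 AMT2   ""
6  SQLCHAR 0 24 ","     5 AMT3   ""
7  SQLCHAR 0 24 ","     6 AMT4   ""
8  SQLCHAR 0 24 ","     7 AMT5   ""
9  SQLCHAR 0 24 ","     8 AMT6   ""
10 SQLCHAR 0 24 "\r\n"  9 AMT7   ""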
View 5 Replies
Oct 1, 2014
I have a Windows Server 2012 R2 two-node cluster with a SQL Server 2014 FCI installed. Data files are on a separate Windows Server 2012 R2 file server. The data file share has been granted Full Control for the SQL Server service and SQL Server Agent service accounts. NTFS permissions are Full Control.
When I try to attach a database
CREATE DATABASE AdventureWorksDW2012
ON (FILENAME = '\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf')
FOR ATTACH
I get this error:
Msg 5120, Level 16, State 101, Line 4
Unable to open the physical file "\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf". Operating system error 5: "5(Access is denied.)".
If I log into the file server (called APRICOT) and look at the NTFS permissions they all look good. I have also reapplied the NTFS permissions from the root folder down.
EDIT
If I log on to one of the nodes in the cluster as the SQL Server service account and navigate to \\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA and copy and paste the data file, it works fine.
EDIT2:
If I log on to the file server and Enable Inheritance at the root level, then Replace all child objects with inheritable permission entries from this object, I get this error:
User Account Control settings on all nodes and the file server are set to Never notify
View 0 Replies
Nov 12, 2014
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file compared to the other files in the filegroup.
So if no filegroups were explicitly created and multiple secondary files are attached to the database, is data stored and written across the multiple files by the same algorithm, or in a different way?
View 2 Replies
May 7, 2014
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and just redistribute it evenly in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP data out of the table, truncate the table and then BCP back into the table which I imagine would have the desired effect of distributing the data evenly over the files.
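A T-SQL variant of that round-trip, staying inside the database rather than using bcp (staging table and names assumed; note it temporarily doubles the space needed):

SELECT * INTO dbo.BigTable_staging FROM dbo.BigTable;  -- stage a copy
TRUNCATE TABLE dbo.BigTable;                           -- deallocates in-row and LOB pages
INSERT INTO dbo.BigTable WITH (TABLOCK)                -- reload; pages spread by proportional fill
SELECT * FROM dbo.BigTable_staging;
DROP TABLE dbo.BigTable_staging;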
View 2 Replies
Jul 29, 2014
I need to load the following data into a SQL table. This is how the vendor is able to provide it to us.
CRCorp Daily Report,,,,,,
,,,,,,
Facility,Location,Purchase Order #,Vendor,Inventory #,Date Ordered,Extended Cost
09-Mowtown 495 CRST,09-402A Women's Imaging,327937,"BARD PERIPHERAL VASCULAR, INC.",113989,7/25/2014,650
09-Mowtown 495 CRST,09-402A Women's Imaging,327936,"WB MASON CO., INC.",112664,7/25/2014,8.64
01-Mowtown 499 CRST,01-302B Oncology,327947,McKesson General Medical,n/a,7/25/2014,129.02
[Code] ....
I have attempted to bulk insert it into this table with no luck.
CREATE TABLE POMaster
(Facility VARCHAR(75),
Location VARCHAR(75),
PONum INT,
VendorNm VARCHAR(100), -- vendor names in the sample data are text, not integers
INVENTORYNUM VARCHAR(25),
orderDte DATE,
ExtendedPrice NUMERIC(10,2)
)
GO
It does not like the double quotes. How can I make this format work? Do I need a format file?
View 2 Replies
May 7, 2014
I want a mechanism that searches the content of all files in my upload folder, then returns the addresses of the files that contain a given keyword...
The contents of the files are not in the table; just the addresses are saved in the table...
View 3 Replies
Dec 1, 2014
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they will take ages to copy over and restore, so it would be much easier to just use the DB files. Should I restore the master DB first (easy, as it's small)?
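Attaching the existing files to the new instance is done per database with CREATE DATABASE ... FOR ATTACH; a sketch with assumed paths:

CREATE DATABASE MyDb
ON (FILENAME = 'E:\Data\MyDb.mdf'),        -- assumed data file location
   (FILENAME = 'E:\Logs\MyDb_log.ldf')     -- assumed log file location
FOR ATTACH;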
View 9 Replies
May 10, 2015
I run the following:
EXECUTE dbo.DatabaseBackup
@Databases = 'F1SB',
@Directory = 'F:\SqlBackup2014',
@BackupType = 'FULL',
@Compress = 'Y',
@Encrypt = 'Y',
[code]...
I cannot see the file created in the directory. The account under which the SQL Server Agent job runs has full privileges on it and is sysadmin. Then I ran the command in SSMS:
BACKUP DATABASE [F1SB] TO DISK = N'F:\SqlBackup2014\<server>\F1SB\FULL\IGS-DB01_F1SB_FULL_20150510_214455.bak' WITH NO_CHECKSUM, COMPRESSION, ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [serverCertificate])
and I get this error message:
Msg 3013, Level 16, State 1, Line 13
BACKUP DATABASE is terminating abnormally.
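One thing worth verifying before an encrypted backup can work: SQL Server 2014 backup encryption requires a database master key and a certificate in master. A minimal setup sketch (password and subject assumed):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE serverCertificate WITH SUBJECT = 'Backup encryption certificate';
-- Back the certificate up immediately; without it the backups cannot be restored.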
View 9 Replies
May 20, 2015
Is there a better way to deal with virtual log files? I see several approaches to dealing with/decreasing the virtual log files for a database, and I want to know the best and safest approach, from the masters here.
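Whatever the approach, the starting point is counting the VLFs; on 2014 the long-standing (undocumented) command is:

DBCC LOGINFO;   -- returns one row per virtual log file for the current database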
View 9 Replies
Oct 12, 2015
How does one shrink log files in SQL 2014 AlwaysOn?
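For what it's worth, DBCC SHRINKFILE is run on the primary replica and the change flows to the secondaries; a sketch with an assumed logical file name:

USE MyDb;
DBCC SHRINKFILE (N'MyDb_log', 1024);   -- target size in MB; the log must have been backed up first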
View 1 Replies
Jun 25, 2014
I am actually very new to SQL databases. I have received an .mdf and an .ldf for a database of size 50 GB...
I need to create or attach these files to a new database and extract some columns, then convert them to .txt or .csv...
View 5 Replies
Aug 15, 2014
I have set AutoRecover to save every 3 minutes and keep information for 7 days, under the SSMS 2014 menu: Tools -> Options -> Environment -> AutoRecover.
I've rebooted the box and restarted the SQL Server service and nothing seems to create the files.
View 4 Replies
Mar 23, 2015
What is the best method to restore DBTest1 (with one .mdf and one .ldf) into DBTest2 (with one .mdf, multiple .ndf data files, and 4 filegroups associated with specific data files)? I do not see how the one .mdf file (in DBTest1) can be separated into the other 4 filegroups (in DBTest2). This does not sound possible with backup DBTest1/restore to DBTest2 or detach/attach, because the underlying filegroup and file structure is different.
What method should be used to get the data and structure from DBTest1 (includes 1100 Tables and 550 GBs of Data) into DBTest2 (with 4 filegroups)? Is the following possible:
1) First, in DBTest2, execute a script to create tables/indexes on appropriate filegroups.
2) In DBTest2, use scripts to pull data from DBTest1 into DBTest2, for example INSERT INTO DBTest2.dbo.tables with SELECT FROM DBTest1.dbo.tables, or use SELECT ... INTO DBTest2.dbo.tables FROM DBTest1.dbo.tables.
Or, is it possible to use the BULK INSERT or BULK COPY options? The Export/Import Wizard?
Does the CREATE INDEX step need to be done after the data is loaded into DBTest2?
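If the script-plus-reload route is taken, the key detail is the ON <filegroup> clause at table-creation time; a sketch with assumed names:

-- In DBTest2: create the table on its target filegroup, then pull the data across.
CREATE TABLE dbo.Orders
(
    OrderID   int  NOT NULL,
    OrderDate date NOT NULL
) ON FG_Orders;                                  -- the filegroup decides file placement

INSERT INTO dbo.Orders WITH (TABLOCK)            -- TABLOCK can enable minimal logging
SELECT OrderID, OrderDate
FROM DBTest1.dbo.Orders;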
View 3 Replies
Apr 18, 2001
SQL Server 7.0 doesn't seem to support data files for bulk insert that have quoted text fields.
e.g.
" 1","Farmer","Joe","AAA","Smith John","",20001001,
I've tried using the format file to strip out the quotes, but this doesn't seem to work.
My format file looks like this:
4.2
9
1 SQLCHAR 0 0 "\"" 0 dummy1
2 SQLCHAR 0 9 "\",\"" 2 EmployeeID
3 SQLCHAR 0 35 "\",\"" 3 LastName
4 SQLCHAR 0 35 "\",\"" 4 FirstName
5 SQLCHAR 0 10 "\",\"" 5 Category
6 SQLCHAR 0 35 "\",\"" 6 Supervisor
7 SQLCHAR 0 5 "\"," 7 OpCode
8 SQLCHAR 0 8 "," 8 HireDate
9 SQLCHAR 0 8 "\r\n" 9 TermDate
Any idea on how I can bulk insert a data file where some of the fields are quoted?
View 2 Replies
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
View 9 Replies
May 15, 2015
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering: is there a performance benefit to putting each LDF file on its own LUN, or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
View 3 Replies
Oct 9, 2015
I have configured windows failover clustering 2012 on 4 of my test nodes.
I am trying to add another node to this cluster, but it's not working. I am not even able to start the Cluster service in services.msc.
After installing Windows failover clustering, when I go to the C:\Windows\Cluster folder, I am unable to find the CLUSDB, CLUSDB.1.container, CLUSDB.2.container and CLUSDB.blf files in the folder.
These files are very much present on the other nodes where the Cluster service is running.
I tried copying these files manually to the server where they are missing, but still no luck.
View 1 Replies
May 6, 2015
I have a requirement to
a. read data from different CSV files, and
b. insert and update data in multiple database tables using joins.
This execution runs for 1-2 hours. I can use C# with ADO.NET, but my only concern is that if the execution fails partway due to a connection or other error, all inserted data has to be cleaned up again. I am instead considering writing a stored procedure, wrapped in a transaction, that takes the CSV file paths as input and inserts the data into the database; using a transaction gives the flexibility to roll back to the original state.
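A sketch of that idea (file path, staging and target tables, and columns are all assumed): everything inside the TRY either commits as a unit or is rolled back in the CATCH, so no manual cleanup is needed.

BEGIN TRY
    BEGIN TRANSACTION;
    BULK INSERT dbo.Staging FROM 'D:\feeds\daily.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
    UPDATE t SET t.Amount = s.Amount
    FROM dbo.Target AS t JOIN dbo.Staging AS s ON s.Id = t.Id;
    INSERT INTO dbo.Target (Id, Amount)
    SELECT s.Id, s.Amount FROM dbo.Staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;   -- the whole load is undone on any failure
    THROW;
END CATCH;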
View 9 Replies
May 6, 2008
I'm running a procedure which does INSERT INTO table_name (id, name, ...) SELECT id, name, ... FROM table_name. For some reason the tempdb data file grew to 200 GB. tempdb is set to expand unrestricted by 10%. How can I prevent that from happening? Thanks.
View 5 Replies
Sep 21, 2006
It seems to me that files created on Unix machines with the line terminator \n, or char(10), cannot be imported using the BULK INSERT statement. Is this a bug, or an oversight by Microsoft? Does this mean that unless one replaces all \n with \r\n, there is no way to use BULK INSERT to import Unix files? This is very strange behavior by MSSQL. Even lesser programs such as Excel and Word automatically recognize char(10) as a line termination character. Am I missing something, or is this just the way MSSQL is?
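On later versions, the workaround is to give BULK INSERT the bare line feed as a hex row terminator rather than converting the file; a sketch with an assumed path:

BULK INSERT dbo.Target FROM 'D:\feeds\unix_file.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a');   -- 0x0a = char(10)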
View 7 Replies
Jul 22, 2015
I'm testing, with SQL 2014 on the same DB, a procedure that extracts data from a table into a file and loads data from that file into a different table which has the same columns as the initial table (I use a function to create the CREATE TABLE statement from the source table and change the name of the destination table).
When doing my bcp -c, the record with the special character "é" doesn't make it to the file.
When doing a bcp -w, the record with the special character makes it to the file, but the bulk insert omits the whole record.
The result file, in the case where the record makes it to the file, is Unicode. I'm not using a format file (I don't see the need for it).
The bulk insert into the destination table that contains identical columns as the source (a mixture of int, varchar, char) --> didn't work.
I also tried building the columns of the destination table with nvarchars. Still doesn't work.
I tried the bulk insert with:
CODEPAGE = 'ACP' and 'RAW' --> didn't change anything; still didn't work.
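A detail worth checking here: CODEPAGE only applies to char data. For a file produced with bcp -w, the bulk insert has to be told the file is UTF-16 via DATAFILETYPE; a sketch with assumed names:

BULK INSERT dbo.TargetTable FROM 'C:\ResultFile'
WITH (DATAFILETYPE = 'widechar',        -- matches a bcp -w (UTF-16) file
      FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');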
It's a complicated process that takes 1 XML record that contains information + the CREATE TABLE statement (to eventually be able to do this on a different server/DB) + the title row for each column + the data... Each of these is created with a BCP command (all with the same options). They are then appended to each other with a copy /B c:\file1.txt + c:\File2.txt + c:\File3.txt + c:\File4.csv c:\ResultFile
Once the result file is created, I bulk insert the first 2 rows into one table, "TableA",
create the tmp table "TableB" with the CREATE TABLE statement that is in "TableA",
and do another bulk insert of the remainder of the file into the newly created table.
What else can I try? Should I be creating a format file? What are the benefits of a format file?
It's a very long procedure that does both extract and load (with 12 parameters); not sure what I should put here.
View 6 Replies
Sep 18, 2015
I have two tables for insertion in one transaction scope. Table one has 10 rows. After the first table's insert statement (not yet committed), if I run a select on the first table from another session, it is blocked until my insert is committed or rolled back; from SSMS, it displays the 10 rows only once the transaction finishes. My question is: do I need to use the NOLOCK hint in this situation, or is there something wrong with the isolation level? Some say that in this situation the table should not hold up a select while the insert is in the transaction scope.
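This is the expected blocking under the default READ COMMITTED level: the reader waits on the writer's locks. Row versioning avoids the wait without NOLOCK's dirty reads; a sketch:

ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;  -- readers see the last committed version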
View 5 Replies
Jun 27, 2014
I am trying to run a UNION ALL query in SQL Server 2014 on multiple large CSV files, and I want the result to land in a table in SQL Server. Below is the query, which works in MS Access but not in SQL Server 2014:
SELECT * INTO tbl_ALLCOMBINED FROM OPENROWSET
(
'Microsoft.JET.OLEDB.4.0' , 'Text;Database=D:\Downloads\CSV;HDR=YES',
'SELECT t.*, (substring(t.[week],3,4))*1 as iYEAR,
''SPAIN'' as [sCOUNTRY], ''EURO'' as [sCHAR],
[Code] ....
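One likely culprit: Microsoft.JET.OLEDB.4.0 is 32-bit only and is not available to a 64-bit SQL Server 2014 instance; its usual replacement is the ACE provider. A sketch, assuming the Access Database Engine is installed and ad hoc distributed queries are enabled (the file name is hypothetical):

SELECT * INTO tbl_ALLCOMBINED
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Text;Database=D:\Downloads\CSV;HDR=YES',
    'SELECT * FROM file1.csv');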
What i need is:
1] to create the resultant tbl_ALLCOMBINED table
2] transform this table using PIVOT command with following transformation as shown below:
PAGEFIELD: set on Level = 'Item'
COLUMNFIELD: Sale_Week (showing 1 to 52 numbers for columns)
ROWFIELD: sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN (in this order)
DATAFIELD: 'Sale Value with Innovation'
3] Can the transformed output show more than 255 column fields, i.e., if I want to show all KPI values in the data field?
P.S.: the CSVs contain the same number of columns and data types, but there are more than 100 columns, so I don't think it will be feasible to use a stored proc to create a table specifying that number of columns.
View 9 Replies
Jan 6, 2014
We have created a DDL trigger on a SQL Server 2005 database for DB audit purposes. The following script was used for the trigger creation:
USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[ChangeLog](
[Code] ....
After the DDL trigger creation, the application team started reporting the following errors while executing a stored procedure.
*********************************
Error 1:
INSERT failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods.
Error2:
[Execute SQL Task] Error: Executing the query "exec sp_drop_indexes_EnhLeaseData delete from dbo.leases where vin_num='XXX' and lease_acct_num='XXXX' delete from dbo.leases where vin_num='XXX' and lease_acct_num='080066225' " failed with the following error: "INSERT failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
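That error message lists its own cure: the insert target evidently involves an indexed view, a computed-column index, or similar, and those require specific SET options in the executing session. Adding them at the top of the affected procedure is the usual fix; the documented required settings are:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;
SET QUOTED_IDENTIFIER ON;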
View 1 Replies
Jul 29, 2014
I am doing performance testing of the In-Memory option in SQL Server 2014. As part of this, I want to insert 500 million rows into an in-memory-enabled test table I have created.
I need a sample script to insert 500 million records into a table ....
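A common pattern is a ROW_NUMBER-based number generator run in batches, so each insert stays a manageable transaction; a sketch, with the table definition assumed:

DECLARE @batch int = 0;
WHILE @batch < 500                                -- 500 batches x 1,000,000 rows = 500 million
BEGIN
    INSERT INTO dbo.InMemTest (id, payload)       -- hypothetical memory-optimized table
    SELECT @batch * 1000000 + n.rn, REPLICATE('x', 100)
    FROM (SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
          FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b) AS n;
    SET @batch += 1;
END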
View 9 Replies
Jul 31, 2014
I am trying to add multiple records to my table (insert/select).
INSERT INTO Users
( User_id ,
Name
)
SELECT ( SELECT MAX(User_id) + 1
FROM Users
) ,
Name
But I get the error:
Violation of PRIMARY KEY constraint 'PK_Users'. Cannot insert duplicate key in object 'dbo.Users'.
But I am using the max User_id + 1, so it can't be a duplicate.
This would insert about 20 records.
Why the error?
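The scalar subquery SELECT MAX(User_id) + 1 is evaluated once for the whole statement, so all ~20 inserted rows receive the same key, and that is the duplicate. Numbering the rows fixes it; a sketch with a hypothetical source table:

INSERT INTO Users (User_id, Name)
SELECT (SELECT MAX(User_id) FROM Users) + ROW_NUMBER() OVER (ORDER BY s.Name),
       s.Name
FROM dbo.NewNames AS s;   -- hypothetical source of the ~20 rows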
View 7 Replies
Jan 2, 2015
OK, I think I will need to use a temp table for this; there is no code to share as of yet. Here is the intent.
I need to insert data into two tables (a header table and a detail table). The header table will give me, let's say, an order number, and this order number needs to be placed on the corresponding detail lines in the detail table.
Now if I were inserting a single invoice with one or more detail lines: easy, just capture @@IDENTITY into a variable and do a second insert statement.
What is happening is that I will be importing a ton of invoice headers and inserting those into the header table. The details are already in the database across various tables, and I will do that insert based on a select with some joins. As stated, I need to get the invoice number from the IDENTITY of the header table for each DETAIL insert.
I am assuming the only way to do this is with a loop: insert one header, get the identity; insert into the detail table including the identity variable; and repeat.
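A set-based alternative to the loop: INSERT ... OUTPUT can only return inserted columns, so the usual trick for carrying a source key alongside the new IDENTITY is MERGE with a never-matching predicate. A sketch, all names assumed:

DECLARE @map TABLE (InvoiceNum int, BatchRef int);

MERGE dbo.InvoiceHeader AS tgt
USING dbo.ImportHeaders AS src
    ON 1 = 0                                  -- never matches, so every row is inserted
WHEN NOT MATCHED THEN
    INSERT (CustomerID, InvoiceDate)
    VALUES (src.CustomerID, src.InvoiceDate)
OUTPUT inserted.InvoiceNum, src.BatchRef      -- MERGE may output source columns
INTO @map;

INSERT INTO dbo.InvoiceDetail (InvoiceNum, ItemID, Qty)
SELECT m.InvoiceNum, d.ItemID, d.Qty
FROM @map AS m
JOIN dbo.ImportDetails AS d ON d.BatchRef = m.BatchRef;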
View 9 Replies