I have a procedure that generates some XML from a bunch of tables. Then I use BCP to export it to a file. This works just fine. Here's the sample code:
CREATE TABLE dbo.t_test (i INT, z VARCHAR(30) COLLATE DATABASE_DEFAULT)
INSERT INTO t_test (i, z) SELECT 1, 'Test'
GO
[code]....
But recently, I wanted to add a test that calls this procedure. The test procedure uses the INSERT / EXECUTE technique to put the XML data into a temp table so it can compare things. But then you get the following error: "The FOR XML clause is not allowed in a INSERT statement."
DECLARE @T TABLE (x XML)
INSERT INTO @T (x) EXEC SPRC_EXPORT -- crashes here
So I thought, fine, I'll wrap the FOR XML inside a sub-select:
CREATE PROCEDURE SPRC_EXPORT AS
SELECT ( SELECT * FROM t_test FOR XML PATH('root') )
And the BCP call still works, BUT now the generated file becomes 64 KB instead of 1 KB :) When I look into the file, it shows the same XML, but the string is right-padded with a LOT of spaces. I suspect it has something to do with how BCP uses SET FMTONLY, or that the "type" of the result somehow gets changed when I do the wrapping.
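In case it helps anyone hitting the same padding, here is a minimal sketch (using the names from this post) of forcing the wrapped column back to an explicit XML type; whether this actually removes the trailing spaces depends on how BCP derives the column metadata, so treat it as an assumption to test rather than a confirmed fix:

ALTER PROCEDURE SPRC_EXPORT AS
-- Cast the FOR XML subquery explicitly, so the output column is typed XML
-- rather than whatever wide character type BCP would otherwise describe it as
SELECT CAST((SELECT * FROM t_test FOR XML PATH('root')) AS XML) AS ExportXml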
Hi, I am a learner of MS SQL Server 2000 and I have a doubt about stored procedures. Suppose I have a database with 20 tables, and all the tables have a column named DATE. Can we write a stored procedure to find the data, I mean the data entered between two days, so that if I call the stored procedure against any one of the tables I get the answer? Please help me write the stored procedure. satishkumar.g
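A minimal sketch of one way to do this on SQL Server 2000, using dynamic SQL so the same procedure works against any of the 20 tables; the assumption that the column is literally named DATE comes from the post, everything else (procedure and parameter names) is hypothetical:

CREATE PROCEDURE dbo.GetRowsBetweenDates
    @TableName sysname,
    @FromDate  datetime,
    @ToDate    datetime
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    -- Build the query against whichever table name was passed in
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@TableName)
             + N' WHERE [DATE] BETWEEN @From AND @To'
    EXEC sp_executesql @sql, N'@From datetime, @To datetime', @From = @FromDate, @To = @ToDate
END

Usage would then look like: EXEC dbo.GetRowsBetweenDates 'Orders', '2007-01-01', '2007-01-31'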
I'm working on a web application that uses SQL Server 2000. Once I'm ready to upload my website to a production server, I need to know how to upload my SQL Server database files to the server. Are there actual files that can be uploaded? I've only deployed a website that works with MySQL to a production server, and there were no files to upload... the database had to be created on the server, and then the data itself had to be exported to text files (on my computer), then imported/uploaded to the production server using SQL statements...
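For what it's worth, the usual route is a backup/restore rather than copying the .mdf/.ldf files while SQL Server has them open. A minimal sketch, with hypothetical database, path and logical file names:

-- On the development machine
BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase.bak'
WITH INIT

-- On the production server, after copying the .bak file up
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Uploads\MyDatabase.bak'
WITH MOVE 'MyDatabase_Data' TO 'D:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_Log'  TO 'D:\Data\MyDatabase_log.ldf'

Detaching the database and copying the physical files is another option if the source database can be taken offline first.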
I have tried to move the upload of files from SQL 2000 to SQL 2005. In SQL 2005 I have created a JOB which has 2 steps; each of them activates the same batch files I used in SQL 2000 (where I have changed the server name and so on), for example:
for %%f in ( %BULK_UPLOAD_BASE%\Temp\%1*.txt) do bcp "db1.dbo.Test_TargetTable" in %%f -S SERVER-REP -U n145 -h "TABLOCK" -e %BULK_UPLOAD_BASE%\Error\Err10.txt -o %BULK_UPLOAD_BASE%\Logs\Output10.txt -P n145 -C RAW -f %BULK_UPLOAD_BASE%\format_with_RawData_4test.fmt
When I run this code in a bat file, the data is loaded with no problem. When I run the JOB I see in the LOG that everything was OK, but actually the data from the files was not loaded - any idea why? Thanks in advance, peleg
What data type should I use for my uploadedFiles column in my database? The uploaded files are in document format or .txt. Also, how can I convert those files into PDF files and enable users to download them? Thanks!!! forums.asp.net = "great help"
Hi everyone, I am developing a web-database application using ASP.NET, C# and SQL Server 2000. In the application I want to upload different scanned files (pdf, doc, excel, jpeg, ttf, etc.) onto the database. How can I carry out this task? If anyone has got the solution, please let me know... thanks in advance
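A minimal sketch of a table for holding the uploaded files; on SQL Server 2000 the binary content goes in an IMAGE column (VARBINARY(MAX) on 2005 and later), and all names here are assumptions:

CREATE TABLE dbo.UploadedFiles (
    FileId      INT IDENTITY(1,1) PRIMARY KEY,
    FileName    NVARCHAR(260) NOT NULL,
    ContentType NVARCHAR(100) NOT NULL,   -- e.g. 'application/pdf', 'image/jpeg'
    FileSize    INT           NOT NULL,
    FileData    IMAGE         NOT NULL,   -- the raw bytes uploaded from ASP.NET
    UploadedAt  DATETIME      NOT NULL DEFAULT GETDATE()
)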
I have created a DTS package which imports a text file into a single SQL Server table with 8 columns (SourceData). The DTS package uses the 'Test1.txt' file. Now I have around 200 text files (Test1, Test2, ..... Test200). I need to import them one by one into the 'SourceData' table. Could you please help me get this solved?
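One low-tech alternative to editing the DTS package 200 times is a T-SQL loop over the file names; a minimal sketch, assuming the files sit in one folder (the path and the delimiters are hypothetical) and all share the SourceData layout:

DECLARE @i int, @sql nvarchar(4000)
SET @i = 1
WHILE @i <= 200
BEGIN
    -- Build 'BULK INSERT ... FROM C:\Import\TestN.txt' for each N
    SET @sql = N'BULK INSERT dbo.SourceData FROM ''C:\Import\Test'
             + CAST(@i AS nvarchar(10)) + N'.txt'' '
             + N'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)
    SET @i = @i + 1
END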
How do I browse for files on the computer and upload them to the server, then link the file's URL (on the server) into a database? What I am wanting to do: upload pictures (for products) for use in product pages. What I think I need: browse for images on the local computer, upload the file to the server, and store the image URL (relative to the server) in the DB.
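A minimal sketch of the database side of that plan (all names hypothetical): the picture itself stays on the web server's disk, and only its relative URL is stored:

CREATE TABLE dbo.ProductImages (
    ProductImageId INT IDENTITY(1,1) PRIMARY KEY,
    ProductId      INT           NOT NULL,
    ImageUrl       NVARCHAR(260) NOT NULL   -- e.g. '/images/products/1234.jpg'
)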
I have 4 different flat file types, each having a different number of columns, order of columns, etc. I want to upload all 4 types into the same destination table in the SQL database. Before uploading I need to apply transformations to each column in the flat files. The transformations could be like
1) Multiplying the source column by 100
2) Putting an IF condition on 2 source columns and then selecting one column to be copied into the destination.
I have the flat files schema with me and also all the transformations that are required.
Question:
Can SSIS provide me with a component that can read the flat file schema and the transformations from the database, apply them to the source data, and then upload it to the constant destination table? Can the Derived Column transformation be provided with the input column list and the transformation to be done on each column dynamically?
Why do I want it this way?
In future there can be additional flat file formats, and we want to keep the changes to the SSIS packages to the minimum. Just entering the additional schema and transformation details in the database should let the package run successfully on the new flat file.
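As far as I know the stock Derived Column transformation fixes its column list and expressions at design time, so fully dynamic transformations usually mean a Script Component (or generating the package) driven by metadata. A minimal sketch of the kind of metadata tables such a package could read; every name here is an assumption:

CREATE TABLE dbo.FileFormat (
    FormatId   INT         PRIMARY KEY,
    FormatName VARCHAR(50) NOT NULL           -- one row per flat file layout
)

CREATE TABLE dbo.FileColumnMapping (
    FormatId       INT          NOT NULL REFERENCES dbo.FileFormat(FormatId),
    SourceOrdinal  INT          NOT NULL,     -- position of the column in the flat file
    DestinationCol VARCHAR(50)  NOT NULL,     -- column in the destination table
    TransformExpr  VARCHAR(200) NULL          -- e.g. 'value * 100' or a CASE-style rule
)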
I'm trying to upload 1000 txt files into one table in SQL. I'm using the following query to upload one txt file at a time:
bulk insert [dbo].AAA_2013_2015 from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (4).txt' with (firstrow = 2, lastrow = ???, fieldterminator = ';', rowterminator = '0x0A')
I'm trying to make the query skip the last row, because it gives me the following error:
Msg 4866, Level 16, State 1, Line 1 The bulk load failed. The column is too long in the data file for row 1, column 17. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Is there a command to skip the last row, something like lastrow = all-1, or something like that?
I also tried the MAXERRORS option, like this:
bulk insert [dbo].AAA_2013_2015 from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (15).txt' with (firstrow = 2, fieldterminator = ';', MAXERRORS = max_errors, rowterminator = '0x0A')
But it does not recognize the MAXERRORS option; I also tried putting a number of errors instead of max_errors.
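For reference, a minimal sketch of the documented form, in which MAXERRORS takes a literal number (max_errors in the syntax diagram is just a placeholder); the path here is shortened to a hypothetical one:

bulk insert [dbo].AAA_2013_2015
from 'C:\Import\FACT_BLOQ_AGO13 (15).txt'
with (firstrow = 2, fieldterminator = ';', rowterminator = '0x0A', MAXERRORS = 10)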
I have a situation where I want to take data from a local SQL Server 2000 db and update a remote database. I have the SQL all set; I was wondering if this can be done on a timed interval with a stored procedure on the local db. Thanks in advance for your time. Dean-O
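A minimal sketch of scheduling it with SQL Server Agent; the job name, procedure name, database name and the 30-minute interval are all placeholders:

EXEC msdb.dbo.sp_add_job @job_name = N'PushToRemote'

EXEC msdb.dbo.sp_add_jobstep
    @job_name      = N'PushToRemote',
    @step_name     = N'Run update proc',
    @subsystem     = N'TSQL',
    @database_name = N'LocalDb',
    @command       = N'EXEC dbo.usp_UpdateRemote'

EXEC msdb.dbo.sp_add_jobschedule
    @job_name             = N'PushToRemote',
    @name                 = N'Every 30 minutes',
    @freq_type            = 4,    -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,    -- units of minutes
    @freq_subday_interval = 30

EXEC msdb.dbo.sp_add_jobserver @job_name = N'PushToRemote'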
I have created a trigger to call a program that was written by our programmer. The program basically reads the record in the table, writes it to a text file, then deletes the record from the table.
The trigger is an AFTER INSERT trigger. After we added the trigger, we inserted a record into the table. The result is that the record is still there and did not get deleted. Also, the text file didn't get created either. It seems that it takes a long time for the record to be written to the table.
But if we just run the program (an exe file), it can write a text file in the folder and delete the record. The trigger is basically:
USE [Zinter]
GO
/****** Object: Trigger [dbo].[ZinterProcess] Script Date: 04/29/2014 18:34:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
I am trying to save an uploaded image and its associated info to a SQL Server database using a stored procedure, but keep running into trouble. When trying to save, RowAffected always returns -1. But when I debug it, I don't see a problem either in the stored procedure from Server Explorer or in the code-behind. It looks to me like every input param contains the correct value (such as the uploaded image file name, contentType, etc.); for the imgbin input param the value shows as something like "byte[] imgbin={Length=516}". Below is my code, could anyone help point out what I did wrong? Thank you.
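For comparison, a minimal sketch of what such a procedure usually looks like (all names here are assumptions, not the poster's actual code). One thing worth checking is SET NOCOUNT: with NOCOUNT ON, ADO.NET's ExecuteNonQuery reports -1 rows affected even when the INSERT succeeds.

CREATE PROCEDURE dbo.usp_SaveImage
    @FileName    NVARCHAR(260),
    @ContentType NVARCHAR(100),
    @ImgBin      VARBINARY(MAX)   -- IMAGE on SQL Server 2000
AS
BEGIN
    -- No SET NOCOUNT ON here, so the rows-affected count reaches the caller
    INSERT INTO dbo.Images (FileName, ContentType, ImgData)
    VALUES (@FileName, @ContentType, @ImgBin)
END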
I need to have a script that asks the user for a value; the script will search for all records that match the value. Then it will display the number of records found and ask the user to enter a different value. The rest of the script will use this new value and increment it by 1 n times, where n is the number of records found. I started the script where it asks for "HANDLE" and displays the number of records found with that "HANDLE":
declare @HANDLE as varchar(30)
declare @COUNT as varchar(10)
declare @STARTINV as varchar(20)
set @HANDLE = ?C --This is the parameter to search for records with this value
set @STARTINV = ?C --User will input the starting invoice number
SELECT COUNT(*) as OrderCount FROM SHIPHIST where HANDLE = @HANDLE
I just can't figure out how to use the entered invoice # and increment it by 1 until it reaches the number of records found.
This will be the end result:
Count=5 --results from query
STARTINV=00010 --Value entered by user
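A minimal sketch of one way to finish it, assuming the goal is to stamp each matching SHIPHIST row with a sequential, zero-padded invoice number starting at @STARTINV (the INVOICE column name is an assumption):

-- Number the matching rows 0..n-1 and add that offset to the starting invoice
;WITH numbered AS (
    SELECT INVOICE,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS rn
    FROM SHIPHIST
    WHERE HANDLE = @HANDLE
)
UPDATE numbered
SET INVOICE = RIGHT(REPLICATE('0', LEN(@STARTINV))
                    + CAST(CAST(@STARTINV AS int) + rn AS varchar(20)),
                    LEN(@STARTINV))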
I am rewriting, using SSIS, our DTS packages that upload tables from FoxPro to SQL Server. I am finding that SSIS is way slower. My uploads using DTS take just 7 minutes, whereas doing the same thing in SSIS is taking 1 hour 15 minutes.
I'm contemplating a SQL Server archive strategy that rolls really old data off any sort of DBMS and onto low-cost media like DVDs in a non-relational archive format. I don't want to ever worry about these archives spanning different versions of SQL when I go to retrieve a range of data that happens to span versions (e.g. one disc was sourced from 2005, another from 2008, but my report needs a union of both).
So I'm thinking about neutral/efficient formats for these archives, and a live homegrown catalog that can determine exactly which disc(s) need to be mounted based on from and to date parameters passed in... all so that data that might span discs (and versions, and maybe even schemas) can be merged and loaded into my SQL-version-du-jour's "throw away" archive database for a one-time report or other unplanned activity.
I remember raw data files being very convenient as an ETL format for our customers who have SSIS, but I wouldn't want our SQL Express customers to be left without the archiving capability. Do the "things" that read and write raw data files really originate in some special T-SQL command that all SQL editions can use, or is it strictly an SSIS thing?
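The raw file source/destination is, as far as I know, strictly an SSIS feature; there is no T-SQL statement that reads or writes that format. A hedged alternative that any edition (including Express) can use is bcp native format; a minimal sketch, driven through xp_cmdshell with hypothetical names and paths, and not claiming the same version/schema neutrality the post is after:

-- Export a table in bcp native format (-n); -T = trusted connection
EXEC xp_cmdshell 'bcp MyDb.dbo.OldOrders out "D:\Archive\OldOrders_2005.dat" -n -T -S MYSERVER'

-- Later, load it back into the throw-away archive database
EXEC xp_cmdshell 'bcp ArchiveDb.dbo.OldOrders in "D:\Archive\OldOrders_2005.dat" -n -T -S MYSERVER'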
I am currently investigating a high avg write time (145 ms) issue which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb in that:
1. Data files = number of physical cores.
2. Data files and log files are on separate partitions away from the other databases.
3. Tempdb is presized and no incremental file increases look like they are happening with any frequency.
We have SharePoint 2012 set up on other SQL Servers, with tempdb set up following the same guidelines and far more SharePoint activity on similarly specified hardware, which is why it's confusing. File IO auditing on the partitions themselves shows that the file IO is very fast on the partitions that the tempdb data files sit on, which leads me to believe that SharePoint may be the culprit, perhaps due to excess use of tempdb with operations taking a long time to resolve.
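A minimal sketch of a query that breaks the write stalls down per tempdb file, using sys.dm_io_virtual_file_stats, which may help confirm where the 145 ms average is coming from:

SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name                  AS file_name,
        vfs.num_of_writes,
        vfs.io_stall_write_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM    sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY avg_write_stall_ms DESC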
I am testing a blank database created over two physical files on two separate disks, with one table called data which has one column called values of type nvarchar(max).
I filled the table up with a whole load of data and ran a select * against it. If I run Perfmon at the same time I can see that the read load has been spread over multiple disks, as each of these disks is being read from in parallel. If I create the same database on a single file and run the same select * again, it takes much longer, proving that in the multi-file case the read load was distributed across multiple disks.
Now moving on to writes, and this is where the confusion lies. I understand that SQL Server fills files evenly until they need growing, after which it will then fill files individually until they are full in a round-robin fashion, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed out while it is filling these filegroups.
I ran a continual insert into my table with GO 1000000 to monitor how the files are being filled up. I monitored where SQL Server was physically placing the rows as they were being inserted by running the following query:
;WITH CTE AS (SELECT sys.fn_PhysLocFormatter (%%physloc%%) col1, RIGHT(LEFT(sys.fn_PhysLocFormatter (%%physloc%%),2),1) AS [Physical RID], DATAID
[Code] ....
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, etc. In other words it would hit one disk, then another disk, then back to disk one to fill the files evenly. Is there any way to make SQL Server distribute the writes out in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads, as with writes only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
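For anyone wanting to verify the round-robin behaviour, here is a minimal sketch that counts how many rows of the data table currently sit in each file, using the undocumented sys.fn_PhysLocCracker function:

SELECT plc.file_id, COUNT(*) AS rows_in_file
FROM   dbo.data
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
GROUP BY plc.file_id
ORDER BY plc.file_id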
I would like to add the actual file size in MB or GB of each file to the results.
select sd.name, mf.name as logical_name, mf.physical_name,
       case when dm.mirroring_state is null then 'No' else 'Yes' end as Mirrored
from sys.sysdatabases sd
join sys.master_files mf on sd.dbid = mf.database_id
join sys.database_mirroring dm on sd.dbid = dm.database_id
The sp_spaceused procedure does a nice job on its own giving me what I want (only one db at a time, though), plus a bonus allocated-space column. How can I combine this sp with my other query, or is there a better way to add this information?
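A minimal sketch of one way to do it without sp_spaceused: sys.master_files.size is stored in 8 KB pages, so the allocated size in MB can be computed directly in the existing query:

select sd.name, mf.name as logical_name, mf.physical_name,
       cast(mf.size as bigint) * 8 / 1024 as size_mb,
       case when dm.mirroring_state is null then 'No' else 'Yes' end as Mirrored
from sys.sysdatabases sd
join sys.master_files mf on sd.dbid = mf.database_id
join sys.database_mirroring dm on sd.dbid = dm.database_id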
I have to download files from an SFTP server, for which I am using WinSCP, and I am able to successfully automate the process of downloading the files. But it is downloading all the files instead of only the current day's files. I have searched the WinSCP documentation for a time query parameter to pass to the get command, but it's not working.
My source file name contains a date field, as in "Filenameexample_150120_N001.txt". Here 150120 means Jan 20 2015.
WinSCP has no functionality to filter the files by parsing the date field in the name, so how do I look for files which are from today's date?
Currently I am downloading using *.* (meaning all files).
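One option is to build today's file mask on the SQL side and hand it to the WinSCP script instead of *.*; a minimal sketch, where CONVERT style 12 produces the yymmdd format used in the file names and the prefix is taken from the example name:

DECLARE @mask varchar(100)
SET @mask = 'Filenameexample_' + CONVERT(varchar(6), GETDATE(), 12) + '_*.txt'
SELECT @mask AS TodaysFileMask   -- e.g. Filenameexample_150120_*.txt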
I need to find all empty files in the database (SQL Server 2008 R2 SP2). The files become empty after the archival of partitioned tables.
I was trying this:
select f.name, *
from sysfiles f (nolock)
left join sys.filegroups fg (nolock) on f.name = fg.name
left join sys.indexes i (nolock) on i.data_space_id = fg.data_space_id
left join sys.all_objects o (nolock) ON i.[object_id] = o.[object_id]
where i.name is NULL and o.name is NULL
Did not work.
My file names are the same as the filegroup names containing the file, which is why I was joining by name.
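A minimal sketch of another way to find the empty files, using FILEPROPERTY to compare allocated size against used pages per file (a data file is never literally zero pages used, so the near-zero ones are the candidates):

SELECT  f.name  AS file_name,
        fg.name AS filegroup_name,
        f.size * 8 / 1024                            AS size_mb,
        FILEPROPERTY(f.name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM    sys.database_files f
LEFT JOIN sys.filegroups fg ON fg.data_space_id = f.data_space_id
ORDER BY used_mb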
How can we write triggers on the system table dbo.sysobjects? When I try to write a trigger, it gives an error that permission is denied. I even enabled the "allow modifications to system catalogs" option in Enterprise Manager; it still does not give permission. How can I create a trigger on the dbo.sysobjects table?
I have about 1200 SQL files in one of my folders. Almost all of these files do data inserts and updates, so they should be run only once. As and when required, I have manually run around 150 of them already. Whenever I run any of these scripts, I log that file name into a log table in my SQL Server, including the execution time. Since running the 1000+ remaining files takes a lot of time, I want to automate running of these files through a batch file. But I also want to filter out the files that have already been run. My file list looks as follows.
select * from sqlfileexecutionlog

FileName                    RunTime                  Result
--------------------------- ------------------------ -------
DeleteDuplicateOrders.sql   03/12/2014 14:23:45:091  Success
UpdateInventory.sql         04/06/2014 08:44:17:176  Success
Now I want to create a batch file to run the remaining files from my directory against my SQL Server. I also want to wrap each of these SQL file executions in a transaction and log success/failure along with the runtime and filename into the sqlfileexecutionlog table. As I add new SQL files into this directory, I should be able to run the same batch file and execute only the SQL files that have not been run.
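A minimal sketch of the T-SQL pieces such a batch/sqlcmd loop could use: a check for files that are already logged, and the logging insert itself (column names follow the log table shown above; the batch loop and the transaction wrapping around each script are not shown here):

-- Has this file already been run?
SELECT COUNT(*) AS AlreadyRun
FROM dbo.sqlfileexecutionlog
WHERE FileName = 'UpdateInventory.sql'

-- After a script completes, record the outcome
INSERT INTO dbo.sqlfileexecutionlog (FileName, RunTime, Result)
VALUES ('UpdateInventory.sql', GETDATE(), 'Success')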
I have a full backup followed by transaction log backups every Monday, Wednesday and Friday. How can I restore these files using SQL Agent, to automate restoration of backup files with different file names?
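A minimal sketch of the restore sequence such a job step would run: the full backup is restored WITH NORECOVERY and each log backup is applied in order, with only the last one using WITH RECOVERY (database name and paths are hypothetical; in the Agent job the file names would be built dynamically):

RESTORE DATABASE MyDb
FROM DISK = 'D:\Backups\MyDb_Full.bak'
WITH NORECOVERY, REPLACE

RESTORE LOG MyDb
FROM DISK = 'D:\Backups\MyDb_Log_1.trn'
WITH NORECOVERY

RESTORE LOG MyDb
FROM DISK = 'D:\Backups\MyDb_Log_2.trn'
WITH RECOVERY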
How do I write aggregate functions for tables and lists? If I can write them, many problems in my reports will be solved. I tried writing one in the filters but I got an error saying aggregate functions are not allowed for tables and lists. Can anyone help me in this regard?
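The usual workaround is to do the aggregation in the dataset query itself, so the table or list filter only has to compare plain fields; a minimal sketch with hypothetical table and column names:

SELECT  CustomerId,
        SUM(OrderAmount) AS TotalAmount,
        COUNT(*)         AS OrderCount
FROM    dbo.Orders
GROUP BY CustomerId
HAVING  SUM(OrderAmount) > 1000   -- the "filter" applied server-side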