I need to create a text file using information from SQL tables/views in the following format... Can anyone recommend a direction or procedure to look into, i.e., a SQL script, a custom DTS package, etc.? The items in parentheses identify specific portions of the text file.
I am having a problem with a DTS package I am writing. The package gets text from a table and writes it out to a text file. I have most of it sorted, and the output file contains text like this:

PN,GEN,T_KI-3,2006-01-24 00:30,480,2006-01-24 01:00,480
PN,GEN,T_KI-2,2006-01-24 00:40,484,2006-01-24 02:00,482
PN,GEN,T_KI-3,2006-01-24 00:50,490,2006-01-24 05:00,486

What I need is a line at the end of the text which is: <EOF>

So the output from this will look like:

PN,GEN,T_KI-3,2006-01-24 00:30,480,2006-01-24 01:00,480
PN,GEN,T_KI-2,2006-01-24 00:40,484,2006-01-24 02:00,482
PN,GEN,T_KI-3,2006-01-24 00:50,490,2006-01-24 05:00,486
<EOF>

I am having problems adding the line at the end of the text file. Can anyone advise me of a good solution or show me where I am going wrong?
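One workaround that can avoid post-processing the file (a sketch only; the column names and table name are assumptions, not the real ones) is to make the DTS source query itself emit the trailer row, with a sort column so <EOF> comes out last:

select line
from
(
    -- Real data rows: build one delimited line per row (hypothetical character columns;
    -- non-character columns would need CONVERT).
    select col1 + ',' + col2 + ',' + col3 as line, 0 as sortorder
    from dbo.MyExportTable
    union all
    -- Trailer row that should come out as the final line of the file.
    select '<EOF>', 1
) x
order by sortorder;

Mapping only the line column in the transformation keeps the sort column out of the file.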
Hi everyone, I'm new to this group. I'm trying to write text files, adding content from a text column (more than 8000 characters). I found code on how to write files and it works, but I have a problem when adding the text column to the body of the file. Any idea? Tip? Thanks in advance! Pablo.
I have a Foreach Loop which scans a table and gets the names of a bunch of procedures, and then back in the Foreach Loop they get executed. I'm trying to figure out how I can create a sort of log file that records the name of the procedure that is currently being executed, along with the current date/time stamp, onto a flat file. I haven't been able to figure this out yet... does anyone know how to do this? I grab the names of the stored procedures from the table, store each in a variable, and use the name from the variable to actually execute the stored procedure.
I guess in essence, the question is: how do I directly write lines of 'text' (from, say, a variable) into a flat file?
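If writing straight to a flat file from inside the loop proves awkward, one workaround (a sketch only, with hypothetical object names) is to log each execution to a table with a parameterized Execute SQL Task placed just before the stored procedure runs, then export that table to the flat file at the end of the package:

-- Hypothetical log table; the SSIS variable holding the procedure name
-- would be mapped to the ? parameter of the Execute SQL Task.
create table dbo.ProcExecutionLog
(
    LogID int identity(1,1) primary key,
    ProcName sysname not null,
    ExecutedAt datetime not null default getdate()
);

-- Statement the Execute SQL Task would run inside the Foreach Loop:
-- insert into dbo.ProcExecutionLog (ProcName) values (?);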
I am trying to create a text file from multiple SQL tables using @BCP_Command. I tried using DTS and SQL, but the number of columns in the tables has to be the same when doing a union. I also need to control the delimiter placed between each column. I've learned how to use BCP commands on one file, but I'm not sure whether you can make it work with two or more.
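One approach that may help (a sketch; the view, table, server, and path names are hypothetical) is to wrap the join or union in a view so the column lists line up in one place, and then bcp the view out with an explicit field terminator:

-- Hypothetical view combining the source tables with a fixed column list.
create view dbo.ExportView
as
select o.OrderID, o.OrderDate, c.CustomerName
from dbo.Orders o
join dbo.Customers c on c.CustomerID = o.CustomerID;
go
-- Export the view with a comma as the field delimiter. Run from a command
-- prompt, or via xp_cmdshell as shown; trusted connection is an assumption.
exec master..xp_cmdshell
    'bcp "MyDatabase.dbo.ExportView" out "C:\export\orders.txt" -c -t"," -T -S MYSERVER';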
I want to write from a table variable to a text file from a stored procedure. I've read about xp_cmdshell, bcp, etc., but I'm worried because it's supposed to be a security problem, and it apparently needs to work from a permanent table. I am also getting the error "The EXECUTE permission was denied on the object 'xp_cmdshell'...".
1. Is xp_cmdshell a bad idea to use even if I get permissions?
2. Can a "permanent" table be used in a stored procedure, starting out fresh each time with 0 rows, rather than using a table variable?
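On question 2, a permanent table can play that role as long as the procedure clears it at the start of each run. A minimal sketch, with hypothetical names (concurrent callers would need something extra, such as a session key column):

create table dbo.ExportStaging
(
    RowID int identity(1,1) primary key,
    LineText varchar(1000) not null
);
go
create procedure dbo.BuildExportRows
as
begin
    set nocount on;
    -- Start fresh each run instead of using a table variable.
    truncate table dbo.ExportStaging;

    insert into dbo.ExportStaging (LineText)
    select SomeColumn            -- hypothetical source column
    from dbo.SomeSourceTable;    -- hypothetical source table
end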
I need to back up just one table, and not even all the data in that table. Is there a way to have SQL save the data returned from a query to some text file that I can then use to build the table on another server? I am on SQL Server 7. Thanks, S
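One common option on SQL Server 7 (a sketch; the query, server name, and path are placeholders) is bcp with the queryout option. It can be run from a command prompt, or from T-SQL via xp_cmdshell:

-- Exports the result of the query to a character-mode, comma-delimited file.
exec master..xp_cmdshell
    'bcp "SELECT col1, col2 FROM MyDatabase.dbo.MyTable WHERE col1 = ''X''" queryout "C:\export\mytable.txt" -c -t"," -S MYSERVER -T';

The resulting file can then be loaded on the other server with bcp ... in or BULK INSERT.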
I am trying to export data from a query in SQL Server 2005 SSIS to a flat file destination. Everything works fine except that the rows returned from my query are written to the flat file as one long string (i.e., without line breaks). I have tried appending a newline character to the rows returned from the query, but that only throws an error when the package is executed. The rows returned from the query are 133 characters wide (essentially only one column per row), so I have set the properties accordingly for a fixed-width file format with 133-character-wide rows.
Any suggestions or ideas on how to correct this would be greatly appreciated.
I'm creating a DB to track clients, programs, and client participation in the programs. They are service programs. A client can be in more than one program and a program can have more than one client. Can someone give me an example of how they would lay out the tables? My guess is:

tblClient: ClientID
tblClientProgramLog: ProLogID, ClientID
tblProgramDetails: ProDetailID, ProLogID
tblPrograms: ProgramID, ProDetailID

I appreciate any suggestions,
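For comparison, one conventional layout (a sketch with hypothetical column names) resolves the many-to-many directly with a junction table between clients and programs:

create table tblClient
(
    ClientID int identity(1,1) primary key,
    ClientName varchar(100) not null
);

create table tblProgram
(
    ProgramID int identity(1,1) primary key,
    ProgramName varchar(100) not null
);

-- Junction table: one row per client per program enrollment.
create table tblClientProgram
(
    ClientID int not null references tblClient(ClientID),
    ProgramID int not null references tblProgram(ProgramID),
    EnrolledDate datetime not null default getdate(),
    primary key (ClientID, ProgramID)
);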
I'm trying to design a database that handles Clients, Cases, Individual and Group Sessions. My problem is that a client can have individual sessions and belong to more than one group at the same time, so I have a many-to-many relationship to deal with. I'm also trying to design it so that I can have a form where, when a group is selected from a drop-down, it shows all clients assigned to that group and lets me enter new session data for them.
Just looking for some advice on how to handle the relationships. Maybe someone could show me how they see the relationships working.
My take is that the session is linked to the case, not the client, but I could be thinking about this incorrectly?
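For what it's worth, here is one hedged sketch (table and column names are assumptions) of how the relationships could hang together: sessions hang off the case, group membership sits in its own junction table, and a group session simply carries a nullable GroupID:

create table Clients  (ClientID int primary key, ClientName varchar(100) not null);
create table Cases    (CaseID int primary key, ClientID int not null references Clients(ClientID));
create table Groups   (GroupID int primary key, GroupName varchar(100) not null);

-- Many-to-many: a client can belong to several groups at the same time.
create table GroupMembers
(
    GroupID int not null references Groups(GroupID),
    ClientID int not null references Clients(ClientID),
    primary key (GroupID, ClientID)
);

-- A session row belongs to a case; GroupID is null for individual sessions.
create table Sessions
(
    SessionID int identity(1,1) primary key,
    CaseID int not null references Cases(CaseID),
    GroupID int null references Groups(GroupID),
    SessionDate datetime not null
);

The drop-down form can then list clients for a group by joining GroupMembers to Clients on the selected GroupID.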
I have a construction estimating system, and I want to develop a project management system. I will be using the same database because there are shared tables. My question is this: tables with dollar values are considered critical data, and these tables should not be shared across the whole company. I do, however, need information from these tables, such as the product and quantity of the product for a given project. When an estimate becomes a project, it is assigned a project number. At this point I thought of copying the required data from the estimate side to the project side. This would result in duplicate data in a sense, but the tables will be referenced from two standalone front-end applications. Should I copy the data from one table to another, or create new "views" onto the estimate tables for the project management portion?
What would be the best solution to this problem? I find that in some circumstances a new table is required, because additional data will be saved on the "Project Management" side, but not in all cases.
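One way to avoid copying the dollar-value rows (a sketch; the table, column, and role names are assumptions) is to expose only the non-sensitive columns through a view and grant the project management application rights to the view rather than the base table:

-- Dollar columns are deliberately omitted from the view.
create view dbo.vProjectMaterials
as
select e.ProjectNumber, e.ProductID, e.Quantity
from dbo.EstimateDetails e
where e.ProjectNumber is not null;
go
-- Hypothetical role used by the project management front end.
-- grant select on dbo.vProjectMaterials to ProjectManagementRole;

Where the project side genuinely needs to store extra data, a separate project table that references the estimate key avoids duplicating what the view already provides.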
I am creating a database where:
- I have a Blogs and Folders system.
- I use a common design so I can implement new systems in the future.
Users, Comments, Ratings, View, Tags and Categories are tables common to all systems, i.e., used by Posts and Files in Blogs and Folders.
- One Tag or Category can be associated with many Posts or Files.
- One Comment, View or Rating should be associated with only one Post or one File. I am missing this ... (1)
Relations between a File / Folder and Comments / Ratings / View / Tags / Categories are done using FilesRatings, FoldersViews, etc.
I am using UNIQUEIDENTIFIER for my primary keys. I checked the ASP.NET Membership tables, a few articles, and a few features in my project, such as renaming files with the GUID of their records. I haven't decided yet between INT and UNIQUEIDENTIFIER.
I am looking for some feedback on the design of my database. One thing I need to improve is mentioned in (1)
Thank You, Miguel
My Database Script:
-- Users ...
create table dbo.Users
(
    UserID uniqueidentifier not null constraint PK_User primary key clustered,
    [Name] nvarchar(200) not null,
    Email nvarchar(200) null,
    UpdatedDate datetime not null
)
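On point (1): if each junction table such as PostsComments or FilesComments puts a unique constraint on CommentID alone, a given comment can be linked to at most one post (or one file). A sketch, using hypothetical junction tables alongside the Users script above; note this alone does not stop the same comment from also appearing in FilesComments, so an alternative is a single Comments table with nullable PostID/FileID columns and a CHECK that exactly one is set:

create table dbo.Comments
(
    CommentID uniqueidentifier not null constraint PK_Comment primary key clustered,
    UserID uniqueidentifier not null references dbo.Users(UserID),
    CommentText nvarchar(max) not null,
    CreatedDate datetime not null
);

create table dbo.PostsComments
(
    PostID uniqueidentifier not null,   -- references dbo.Posts(PostID) in the full schema
    CommentID uniqueidentifier not null references dbo.Comments(CommentID),
    constraint UQ_PostsComments_Comment unique (CommentID)   -- one post per comment
);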
Hello everyone. I'm sorry for the urgent post, but I have a critical issue that needs a solution quickly. So, to my issue: I am adjusting our sales order tables to handle a couple of different scenarios. Currently we have 2 tables for sales orders:
SALESORDERS
-----------
SORDERNBR int PK,
{ Addtl Header Columns... }

SALESORDERDETAILS
-----------------
SODETAILID int,
SORDERNBR int FK,
PN varchar,
SN varchar(25),
{ Addtl Detail Columns ... }
Currently the sales order line item is serial number specific. I need to change the tables to be able to handle different requests like :
Line Item Request ( PN, QTY )
Line Item Request ( SN )
Line Item Request ( PN, GRADE, QTY )
ETC.
I am thinking I need to create a new table to hold the specifics for a particular line item. Maybe like this:
SALESORDERSPECS
---------------
SOSPECID int,
SODETAILID int FK,
SPECTYPE varchar,   -- IE: SN, PN, GRADE { one value per row }
SPECVALUE varchar   -- IE: GRADE A
I'm thinking I would need to rename the SALESORDERDETAILS table to SALESORDERITEMS. SALESORDERITEMS would just contain header info like SalePrice, Warranty, etc...
Then rename SALESORDERSPECS to SALESORDERDETAILS...
Does anyone understand what I'm trying to do? If you need more info please ask. You can also get a hold of me through IM.
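For illustration, a hedged sketch of how a couple of the requests might be stored in the proposed spec table (the IDs and values are made up):

-- Line item 1001 requested as part number plus quantity.
insert into SALESORDERSPECS (SOSPECID, SODETAILID, SPECTYPE, SPECVALUE) values (1, 1001, 'PN',  'ABC-123');
insert into SALESORDERSPECS (SOSPECID, SODETAILID, SPECTYPE, SPECVALUE) values (2, 1001, 'QTY', '5');

-- Line item 1002 requested by serial number only.
insert into SALESORDERSPECS (SOSPECID, SODETAILID, SPECTYPE, SPECVALUE) values (3, 1002, 'SN', 'SN0001234');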
Hi all, I have read MANY posts on how to track changes to data over time. It appears there are two points of view:

1. Each record supports a change-indicator flag to indicate the current record (would this be EVERY table?)
2. Each table is duplicated as an archive table, and triggers are used to update the archive.

Can someone give me some guidance, based on REAL-world experience, on which works best for them? My scenario: I have insurance policies and must track history as policies are updated by customer service reps. Imagine many tables:

Policy > LifePol > LifePolRiders
       > AccidentPol > etc...
       > DIPol > DIPolRiders

To me the archive table scenario does not seem scalable at all... some guidance on design would be appreciated. Thanks!!!
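As one data point, here is a minimal sketch of option 2 for a single, hypothetical table; an AFTER UPDATE/DELETE trigger copies the prior row image into the archive:

create table dbo.Policy
(
    PolicyID int primary key,
    PolicyHolder varchar(100) not null,
    Premium money not null,
    ModifiedDate datetime not null default getdate()
);

create table dbo.Policy_Archive
(
    ArchiveID int identity(1,1) primary key,
    PolicyID int not null,
    PolicyHolder varchar(100) not null,
    Premium money not null,
    ModifiedDate datetime not null,
    ArchivedDate datetime not null default getdate()
);
go
create trigger trg_Policy_Archive on dbo.Policy
after update, delete
as
begin
    set nocount on;
    -- "deleted" holds the before-image of every changed or removed row.
    insert into dbo.Policy_Archive (PolicyID, PolicyHolder, Premium, ModifiedDate)
    select PolicyID, PolicyHolder, Premium, ModifiedDate
    from deleted;
end

The pain point is that the trigger and archive table have to be repeated (and kept in sync) for every audited table, which is the scalability concern raised above.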
I have a schema that is mostly working, but I was wondering if some of you with more experience than I might give me some constructive criticism on my methodology.
Basically, I have a single table that stores data for many records. Each record has a variable number of fields, each of which can be a different data type. Later, queries will pull filtered subsets of data from the table, and do calculations on specific fields. In my implementation, the fields for a record are bound together by the datagroup (uniqueidentifier) column in the LotsOData table, the field name is defined by the dataname column, and the field value is stored in the datavalue column, which is type sql_variant.
One problem I had, and I'm not able to reliably replicate, is that the more complicated queries sometimes raise casting errors on the sql_variant column, even when the data is absolutely correct. I've been able to avoid this case by pre-selecting some of the subqueries into temporary tables first, then joining on the temp tables in the main query, but that seems horribly inefficient.
I've included a sample table, data, and query to demonstrate my basic solution. I was wondering if anybody could provide some insight on a better way of designing a solution for this scenario.
Thanks! -Eric.
PS: bonus points if you have any insight at all on the casting error I mentioned!!
-- create table
create table LotsOData
(
    pk int identity,
    dataname nvarchar(16) not null,
    datagroup uniqueidentifier,
    datavalue sql_variant
);

-- lot of inserts
declare @group_a uniqueidentifier, @group_b uniqueidentifier, @group_c uniqueidentifier;
set @group_a = newid();
set @group_b = newid();
set @group_c = newid();

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_int', @group_a, 1
union all select 'some_int', @group_b, 2
union all select 'some_int', @group_c, 3

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_char', @group_a, 'a'
union all select 'some_char', @group_b, 'b'
union all select 'some_char', @group_c, 'c'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_string', @group_a, 'abc'
union all select 'some_string', @group_b, '!@#'
union all select 'some_string', @group_c, 'xyz'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_float', @group_a, 1.23
union all select 'some_float', @group_b, 2.34
union all select 'some_float', @group_c, 3.45

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_datetime', @group_a, cast('01/01/2001 01:00:00' as datetime)
union all select 'some_datetime', @group_b, getdate()
union all select 'some_datetime', @group_c, cast('01/01/2009 01:00:00' as datetime)

-- do some big ugly query:
select
    cast(a.datavalue as datetime) as datatime_data,
    cast(b.datavalue as int) as int_data,
    cast(c.datavalue as char(1)) as char_data,
    cast(d.datavalue as nvarchar(max)) as string_data,
    cast(e.datavalue as float) as float_data,
    cast(b.datavalue as int) * cast(e.datavalue as float) as calc_data
from
    ( select datavalue, datagroup from LotsOData where dataname = 'some_datetime' ) a
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_int' ) b on b.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_char' ) c on c.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_string' ) d on d.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_float' ) e on e.datagroup = a.datagroup
where cast(a.datavalue as datetime) between '01/01/2006' and '01/01/2008';
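On the casting-error bonus question, one hedged guess is that the optimizer sometimes evaluates the outer CAST before the dataname filter has removed rows of the wrong base type. A sketch of a guard using SQL_VARIANT_PROPERTY (TRY_CAST does not exist in SQL Server 2005, so CASE does the short-circuiting):

-- Guarded version of two of the casts and the date filter: the cast only
-- happens when the stored base type really is datetime/int.
select
    case when sql_variant_property(a.datavalue, 'BaseType') = 'datetime'
         then cast(a.datavalue as datetime) end as datetime_data,
    case when sql_variant_property(b.datavalue, 'BaseType') = 'int'
         then cast(b.datavalue as int) end as int_data
from LotsOData a
inner join LotsOData b
    on b.datagroup = a.datagroup and b.dataname = 'some_int'
where a.dataname = 'some_datetime'
  and case when sql_variant_property(a.datavalue, 'BaseType') = 'datetime'
           then cast(a.datavalue as datetime) end
      between '01/01/2006' and '01/01/2008';

Pre-selecting into temp tables works for the same reason: it forces the filter to happen before the cast.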
Any design suggestions for the best way to architect this report using SQL Reporting Services 2005 are appreciated!
My website features a catalog of roughly 50,000 items, each of which may appear in a list of search results or in a detailed view. There are counters on the pages that update totals for such appearances and track other item-specific information in several tables in a SQL database. The catalog of items changes frequently, so the list of item IDs is never exactly the same from month to month.
I've been asked to produce a monthly report of this data for each of the items in the catalog, with reports for the current and previous months (for many years) accessible at all times. Some -- but not all -- items are useful for one purpose or another and so can be considered as belonging to a group of items. Although I have not yet been asked to create a report that aggregates the values for all Group members into a single report for that Group, I can clearly see it would be valuable and will be requested soon.
To ensure the report captures the data for an entire month, it must be run at the very end of each month. That means I will need to run the report using a Schedule that kicks off the process at 12:01am every 1st of the month. The report must be processed and stored for later retrieval and rendering on demand.
Considering the number of items and the indefinite length of time the report data must be retained, my question is really what's the best way to set all this up?
Should I create a report for each item separately? That would mean the scheduled task would have to somehow discover the current list of item IDs (which is available via a query against the database) and create and process (but not render) a report for each one (passing the item ID as a report parameter?), adding it to the report history. Although each report would be small and take only a short time to run, overall that seems like it would take a long time and create a huge number of reports to store each month.
Or should I create a single 'master' report that contains all the data for every item for the month, and then use the item ID as a filter on the data when it is rendered? While that means only one report is created each month and added to the history, it would be a much larger report and take much longer to run (with more potential for timeouts and errors to scuttle the whole report). It also means all the data for the entire report has to be loaded every time the report is rendered, even though only 1/50,000 of the data (the data for 1 of the 50,000 items) will actually be viewed with any given rendering. That seems overly cumbersome, slow, and wastefully bandwidth-intensive.
Any alternatives, suggestions, considerations, etc. -- all welcome!
Hi, I have two tables, Table A and Table B, below with some dummy data...

Table A (contains the specific unique settings that can be requested)
Id, SettingName
1, weight
2, length

Table B (contains the setting values; here three values relate to weight and one to length)
Id, Brand, SettingValue
1, A, 100
1, B, 200
1, null, 300
2, null, 5.3

(There is also a list of Brands available in another table.) No primary keys / referential integrity has been set up yet. Basically, depending upon the Brand requested, a different setting value will be present. If a particular brand is not present (signified by a null in the Brand column in table B), then a default value will be used. Therefore if I request the weight and pass through a Brand of A, I will get 100. If I request the weight but do not pass through a brand (i.e. null), I will get 300.

My question is, what kind of integrity can I apply to avoid users specifying duplicate Ids and Brands in table B? I cannot apply a composite key on these two fields as a null is present. Table B will probably contain about 50 rows and probably 10 of them will be brand specific. The reason it's done like this is that in the calling client code I want to call some function, e.g. getsetting(weight) .... result = 300, or if it is brand specific getsetting(weight, A) ..... result = 100. Any advice on integrity or table restructuring would be greatly appreciated. It's SQL 2000 SP3. Thanks, brad
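One common workaround (a sketch only; table and column names are assumptions) is to store the default row with a sentinel Brand value such as 'DEFAULT' instead of null, which makes a composite unique constraint possible, and to do the brand-to-default fallback in a small function:

create table dbo.SettingValues
(
    SettingId int not null,
    Brand varchar(10) not null default 'DEFAULT',   -- sentinel instead of null
    SettingValue decimal(18,4) not null,
    constraint UQ_SettingValues unique (SettingId, Brand)
);
go
create function dbo.getsetting (@SettingId int, @Brand varchar(10))
returns decimal(18,4)
as
begin
    declare @value decimal(18,4);

    -- Try the brand-specific row first.
    select @value = SettingValue
    from dbo.SettingValues
    where SettingId = @SettingId and Brand = @Brand;

    -- Fall back to the default row.
    if @value is null
        select @value = SettingValue
        from dbo.SettingValues
        where SettingId = @SettingId and Brand = 'DEFAULT';

    return @value;
end

Scalar functions and composite unique constraints are both available on SQL 2000 SP3, so this should fit the stated environment.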
I have recently been tasked with rewriting a database that holds large volumes of data, whilst ensuring that queries can be run in optimal time. Having never really delved into this sort of thing before, I hoped you guys might be able to offer some advice and guidance.
The design I have inherited is based around 2 main tables:
[captured_varbinds]
    [id] [int] IDENTITY (1, 1) NOT NULL
    [captured_trap_id] [int] NOT NULL
    [varbind_oid] [varchar] (500)
    [varbind_text] [varchar] (500)
The relationship between the two tables is on the "captured_traps (id)" to "captured_varbinds (captured_trap_id)". Currently the "captured_traps" table contains around 350 million rows, the "captured_varbinds" table contains around 900 million rows.
Now as you can probably gather this model runs like a....well it sort of hobbles more than runs hence the need to redesign.
My current thoughts on this are:
- Normalising all varchars - there are a lot of duplicate values in most of the varchar fields.
- Full Text Indexing
However beyond that I am not sure which route to go down. After googling for most of today I have come across a number of "solutions" however I do not want to go steaming down the track of one of these to discover that it is fatally flawed somewhere.
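As a concrete example of the normalisation idea (a sketch; the names are assumptions), the repeated varbind_oid strings could move to a lookup table so that captured_varbinds stores a small integer key instead of a 500-byte varchar on every one of its 900 million rows:

create table dbo.varbind_oids
(
    oid_id int identity(1,1) primary key,
    varbind_oid varchar(500) not null,
    constraint UQ_varbind_oids unique (varbind_oid)
);

-- captured_varbinds would then carry the key instead of repeating the string:
-- alter table dbo.captured_varbinds add oid_id int references dbo.varbind_oids(oid_id);

Whether full-text indexing helps depends on whether varbind_text is actually searched by words; the lookup-table change pays off regardless because it shrinks the rows and the indexes.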
Hi, I am trying to write the output of a SQL query to a text file using bcp. The problem I am facing is that I have to format the text in the following layout:

<DATETIMESTAMP>08:39 Thursday, March 15, 2007</DATETIMESTAMP>
<CATEGORY>Quarterly Sales</CATEGORY>
<HEADLINE>Quarterly Corporate Earnings (03/15/07)</HEADLINE>
<BODY>
Quarterly Corporate Profits (03/15/07)
             PER-SHARE ($)         NET EARNINGS (mil$)     REV. (mil$)
COMPANY      CURRENT  YEAR-AGO     CURRENT   YEAR-AGO      CURRENT  YEAR-AGO
abc          0.33     n/a          8.60      (13.30)       144.40   112.50
bcdf         0.16     (0.15)       7.80      (7.40)        101.90   81.20
gha          n/a      n/a          (90.00)   (80.00)       488.00   462.00
qwqw         (0.10)   (0.15)       (3.30)    (4.30)        2.90     2.20
Copyright(c) 2007 mycompany.com, Inc. All Rights Reserved
</BODY>

Would somebody please advise whether this sort of formatting is possible with bcp? Please give some pointers, I am completely stuck... The company data is coming from a database table, and I am using SQL 2005.
Hello, historically it has been frowned upon to store textual data in SQL Server. In v6.5 and before, 2K pages were allocated regardless of actual data size. Is this still the case in v7.0? I have a situation where the intranet application wants to store documents/images/etc. Currently, we need to have security granted to each user who connects to the web server to allow them to write files. However, if we write the data via SQL Server, then only the application needs permission. Is this the recommended method? What other options can I use? TIA, Lee
I'm sure I'm not the only one frustrated trying to figure out which log file SSRS writes to when an error occurs. Does anybody know a sure way to tell? It doesn't update the file's modified date/time, so you can't sort by date in Windows Explorer. There seems to be no rhyme or reason to which file it writes to. For example, I had to go through 6 log files to find which one was being written to. I had thought (hoped) that it would always write to the log file with the latest embedded date/time stamp as part of the file name. It does not.
I'm tempted to start using a file system spy, or resort to other tactics. Does anyone have a sure-fire way to see which log file is written to when an error occurs? I'm not talking about SQL dump files - just "normal" errors when processing reports.
I have a scenario where all the stored procedures are stored in a folder (one .sql file per sproc). The stored procedures do not have 'IF EXISTS ... DROP PROCEDURE' in the script, so before creating them we have to drop each sproc manually.
Can anyone help me write a script / SSIS process to loop through each file in the folder and write "IF EXISTS ... DROP PROCEDURE" with the procedure name in it?
I can create a package that loops through each file with a Foreach Loop task, but I don't know how to write to the file using .NET.
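If editing the files turns out to be awkward, an alternative (a sketch, assuming SQL Server 2005 and that dropping every user stored procedure up front is acceptable) is to generate and run the drops straight from the catalog views before the create scripts are executed:

-- Build "IF EXISTS ... DROP PROCEDURE" statements for every user stored procedure.
declare @sql nvarchar(max);
set @sql = N'';

select @sql = @sql
    + N'IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'''
    + quotename(schema_name(p.schema_id)) + N'.' + quotename(p.name)
    + N''') AND type = N''P'') DROP PROCEDURE '
    + quotename(schema_name(p.schema_id)) + N'.' + quotename(p.name)
    + N';' + char(13) + char(10)
from sys.procedures p;

-- print @sql;          -- review the generated statements first
exec sp_executesql @sql;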
Alex writes "Windows Server 2003 Enterprise Version - SQL server 2000 SQL Enterprise Mgr Version 8.0
I am currently developing a backend SQL db for an ASP website. I am only learning, so quite new to it all & would appreciate some help with the following;
I currently have a form that updates a recordset in my SQL db. This is working fine, except for the fact that when the form loads and the db tries to write the field value into my text box, e.g. Address: 20 Harbour Drive, the field value is truncated at the space and it writes the address as 20 in the text box.
When I view the detail page, no problems, the recordset was updated, but when I go back to the update page, the record values are truncated again as though the ASP page thinks the space is some kind of delimiter?
The size property of my textbox element is the same as the varchar datatype size in the SQL table.
Is there a quick way to extract a full dump of 50 tables to 50 corresponding text files?
i.e.
table_a has to be extracted to table_a.txt
table_b has to be extracted to table_b.txt
table_c has to be extracted to table_c.txt etc.
I don't want to have to add each one separately by hand in the DTSX package designer. I can't see any way to do it in a loop (because you have to do the field mapping). I can't seem to get the DTS Wizard to help - it only seems to be able to handle one table-to-text extract at any one time. And I've tried editing the DTSX file directly (in XML), but it looks like it's going to be rather complex, even if I only do it to define the connection managers. Feel free to suggest any better way to do this, though the specification has already been agreed, so I'm unlikely to be able to change it. Thanks
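One low-tech alternative to 50 hand-built data flows (a sketch; the server name, database, output path, table list, and the availability of xp_cmdshell are all assumptions) is to drive bcp from a cursor over the table names:

declare @table sysname, @cmd varchar(1000);

declare tbl_cursor cursor for
    select name from sys.tables
    where name in ('table_a', 'table_b', 'table_c');   -- or the full list of 50

open tbl_cursor;
fetch next from tbl_cursor into @table;

while @@fetch_status = 0
begin
    -- Character-mode export, one text file per table, named after the table.
    set @cmd = 'bcp "MyDatabase.dbo.' + @table + '" out "C:\export\'
             + @table + '.txt" -c -T -S MYSERVER';
    exec master..xp_cmdshell @cmd;

    fetch next from tbl_cursor into @table;
end

close tbl_cursor;
deallocate tbl_cursor;

Since bcp handles the columns itself, no per-table field mapping is needed, which sidesteps the objection above.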
I'll truncate the destination table before each import.
Is the best way to create a temp table, get it populated, and then update the destination table? Or would the use of Merge (Merge Join) be the better approach?
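If the destination is not simply truncated and reloaded, one common pattern (a sketch with hypothetical table and column names) is to land the feed in a staging table and then apply an UPDATE for existing keys followed by an INSERT for new ones:

-- 1) Load the incoming rows into dbo.Staging (from the data flow), then:

-- Update rows that already exist in the destination.
update d
   set d.SomeColumn = s.SomeColumn
from dbo.Destination d
join dbo.Staging s on s.BusinessKey = d.BusinessKey;

-- Insert rows that are new.
insert into dbo.Destination (BusinessKey, SomeColumn)
select s.BusinessKey, s.SomeColumn
from dbo.Staging s
where not exists (select 1 from dbo.Destination d where d.BusinessKey = s.BusinessKey);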
I have to generate reports in Excel format. I send the appropriate content type through the header() function of PHP, but I can't get the <td> colors when using CSS sheets. The following problems occur: 1. The image doesn't come through. 2. <td> doesn't take the CSS style. Following is the code snippet:
I have a data flow task where I have to write to a flat file. It works fine for me. But the thing is, the next time I run the package it must write the data from the OLE DB source to a different copy. Usually the data is overwritten or appended to the already existing data. What I want is that every time the package is run, the data must be written to a different copy.
I'm trying to do this:

1) Have SQL Server return some data from this simple stored procedure. It returns XML:

SELECT * FROM eAccessTypes FOR XML AUTO

2) In my C# I capture the return value in an XmlReader like so:

XmlReader xmlr = mySqlCommand.ExecuteXmlReader();

After this I want to write the xmlr data to a file. Any ideas?
Hi guys, in my project I need to write the data from an XML file to the database through a stored procedure. Online, a lot of help is available for writing data from the database to XML, but I didn't see much info about writing XML to the database. If anyone could give me some code I will really appreciate it (I am a newbie). This application is developed using ASP.NET 1.1 and SQL Server 2005. Thanks for your help in advance.
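Since the database is SQL Server 2005, one hedged sketch is a stored procedure that accepts an xml parameter and shreds it with nodes(); the element and column names below are assumptions about what your file might look like:

create procedure dbo.ImportCustomers
    @xml xml
as
begin
    set nocount on;

    -- Shred /Customers/Customer elements into rows and insert them.
    insert into dbo.Customers (CustomerID, CustomerName)
    select x.c.value('(CustomerID)[1]',   'int'),
           x.c.value('(CustomerName)[1]', 'nvarchar(200)')
    from @xml.nodes('/Customers/Customer') as x(c);
end
go
-- Example call; from the application, the file contents would be passed as the parameter value.
exec dbo.ImportCustomers
    @xml = N'<Customers><Customer><CustomerID>1</CustomerID><CustomerName>Acme</CustomerName></Customer></Customers>';

OPENXML with sp_xml_preparedocument is the older alternative if the value has to arrive as plain text rather than the xml data type.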