I am taking details from a linked server file (.XLS file). Two of the fields in this file are STARTTIME and STARTDATE.
The problem I have is that these have to be imported into a SQL table, but only into one column of type DATETIME. The column name is STARTDATE. I really have no idea how to go about this. Any ideas would be greatly appreciated.
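A minimal sketch of one way to do it, assuming the linked server exposes both fields as text in a format CONVERT understands; the destination table and the four-part linked server name below are placeholders, not from the original post:

-- Concatenate the date text and the time text, then convert the pair to one DATETIME.
INSERT INTO dbo.TargetTable (STARTDATE)      -- hypothetical destination table
SELECT CONVERT(datetime, STARTDATE + ' ' + STARTTIME)
FROM XLSERVER...[Sheet1$];                   -- hypothetical linked server / sheet name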
Hello, how do I concatenate a variable name? Here's the scenario:

declare @var1 varchar(20)
declare @var2 varchar(20)
declare @var3 varchar(20)
declare @var4 varchar(20)
....
declare @var32 varchar(20)

set @var1 = 'Something 1'
set @var2 = 'Something 2'
....
set @var32 = 'Something 3'

/* I have to store the values of these individual variables. I wish to have a WHILE routine which iterates through the above variables. I wish to have the variable name concatenated so that I do not have to write numerous lines of code setting up 32 individual variables. How could I use the '+' operator to join 'var' + @count, where @count runs from 1 through 32? I am having some trouble with the syntax. */

Regards, VS
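T-SQL cannot build a variable name at run time without resorting to dynamic SQL, so one common alternative, shown here as a rough sketch rather than a drop-in answer, is to replace the 32 scalar variables with a table variable keyed by number:

declare @vars table (VarNo int primary key, VarValue varchar(20))
declare @count int
set @count = 1
while @count <= 32
begin
    -- 'Something N' stands in for whatever each @varN would have held
    insert into @vars (VarNo, VarValue)
    values (@count, 'Something ' + cast(@count as varchar(10)))
    set @count = @count + 1
end
select VarNo, VarValue from @vars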
I have an SSIS conversion issue. I'm pulling two tables from a DB2 database into SQL 2005. One table has a list of work orders, and the other has a list of work order comments. There is a unique identifier between the two tables so that a join can be used, however, due to size limitations, I need to be able to combine both tables.
The end result will be replicated out for SQL Mobile Edition, and the file is too large when both tables exist, so I want to concatenate all the comments for each work order into a single text field in the work orders table.
Here is what I am wanting to accomplish:
UPDATE tblWorkOrders
SET Comments = (SELECT Comments
                FROM tblComments
                WHERE tblWorkOrders.ReqNum = tblComments.ReqNum)
I know that this statement will not work because there is a one-to-many relationship between the tables so each work order could get multiple results.
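A sketch of one common SQL Server 2005 approach, using the table and column names from the post and assuming ReqNum is the join key: fold the many comment rows into one string with FOR XML PATH('') inside the correlated subquery.

-- FOR XML PATH('') concatenates all comment rows per work order into one value.
-- Note: with this simple form, characters such as & and < come back XML-escaped.
UPDATE wo
SET Comments = (SELECT c.Comments + ' '       -- separator between comments
                FROM tblComments AS c
                WHERE c.ReqNum = wo.ReqNum
                FOR XML PATH(''))
FROM tblWorkOrders AS wo;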
Newbie question regarding a db I have inherited. A table FullDocuments has a DocNo column with smallint data type and a SequenceNo column, also with smallint data type. DocNo has numbers that represent persons. SequenceNo has numbers that represent specific documents associated with each person (DocNo). So DocNo 5 and SequenceNo 3 represents the 3rd document associated with person 5.

My SELECT statement looks like this:

SELECT ReadingNo, SequenceNo

This returns data like this: 5 3

I would like the SELECT statement to concatenate the values and return this: 5-3

So I wrote SQL like this:

SELECT ReadingNo + '-' + SequenceNo

which returns an alias ('No column name') result value of 8 - an arithmetic result instead of the string concatenation that I want. So my questions are:

1. Should the original database designer have used string data types for these columns, since they will never be used for math purposes?
2. Do I need to cast them to a string data type (like nchar(4), since neither column will ever exceed 4 digits) to get the result I desire?
3. Or can I keep them as smallint and modify my SELECT statement to allow concatenation, yielding a string result?
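For the third question, a minimal sketch (keeping the column names used in the post): cast each smallint to a character type so the '+' operator concatenates instead of adds.

SELECT CAST(ReadingNo AS varchar(4)) + '-' + CAST(SequenceNo AS varchar(4)) AS DocRef
FROM FullDocuments;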
Hi, I need to concatenate two fields and insert the result into each record. So far I have managed to display the concatenation, but how do I insert it?

use northwind
select city, region, ([city] + [region]) as uniqe
from customers
where region is not null

The resulting records in Query Analyzer:

Anchorage      AK    AnchorageAK
Tsawassen      BC    TsawassenBC
Vancouver      BC    VancouverBC
San Francisco  CA    San FranciscoCA
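A sketch of the insert/update side, assuming the combined value should land in a column on the customers table; the column name combined is hypothetical, not from the original post:

-- Store the concatenation back into the table rather than just displaying it.
UPDATE customers
SET combined = [city] + [region]
WHERE region IS NOT NULL;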
Using a GridView to display the data and SQL Server 2000. I have a column in the database, say departtime, of datetime data type that contains both the date and the time (09/19/2007 9:00 PM). I am separating the date and time parts to display in two different textboxes, say txt1 (09/19/2007) containing the date and txt2 (9:00 PM) containing the time, by using CONVERT in the SqlDataSource. Now I need to update the column in the database, and I am using UpdateCommand with parameters in the aspx, like updatecommand = "Update table set departtime = @departtime". How can I update my column as datetime by getting the data from the 2 textboxes? Since I have 2 textboxes displaying data for a single column, if the user edits the data in txt1 to (10/19/2007), then on click of Update I need to populate the column departtime as (10/19/2007 9:00 PM). Please let me know if you have any questions.
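A minimal sketch of one option: pass the two textbox values as two parameters and rebuild the datetime inside the UPDATE itself (the parameter names and key column below are assumptions, not from the post):

-- @departdate carries txt1 ('10/19/2007') and @departtimeofday carries txt2 ('9:00 PM');
-- concatenating and converting yields the single datetime value for the column.
UPDATE [table]
SET departtime = CONVERT(datetime, @departdate + ' ' + @departtimeofday)
WHERE id = @id;   -- hypothetical key column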
I currently have some SQL code that is used to build a string that is a concatenation of string values across multiple rows. The subqueries in the script sometimes return NULL values, so I use the following statement to change the default behavior of the concatenation operator, which prevents my query from returning NULL:

SET CONCAT_NULL_YIELDS_NULL OFF
Here's the code snippet:
select DISTINCT
(SELECT CASE WHEN (t1.MaskValue & HDR.TranTypeID)=1 THEN ' ' + t1.description ELSE '' END FROM transactiontypes t1 WHERE (t1.MaskValue & HDR.TranTypeID)=1) +
(SELECT CASE WHEN (t2.MaskValue & HDR.TranTypeID)=2 THEN ' ' + t2.description ELSE '' END FROM transactiontypes t2 WHERE (t2.MaskValue & HDR.TranTypeID)=2) +
(SELECT CASE WHEN (t3.MaskValue & HDR.TranTypeID)=4 THEN ' ' + t3.description ELSE '' END FROM transactiontypes t3 WHERE (t3.MaskValue & HDR.TranTypeID)=4) +
(SELECT CASE WHEN (t4.MaskValue & HDR.TranTypeID)=8 THEN ' ' + t4.description ELSE '' END FROM transactiontypes t4 WHERE (t4.MaskValue & HDR.TranTypeID)=8) +
(SELECT CASE WHEN (t5.MaskValue & HDR.TranTypeID)=16 THEN ' ' + t5.description ELSE '' END FROM transactiontypes t5 WHERE (t5.MaskValue & HDR.TranTypeID)=16) as 'Transaction Type'
The problem I am having is that I need to be able to use the query above in a view used for reporting. Unfortunately, you cannot use SET CONCAT_NULL_YIELDS_NULL OFF in a view. This causes my query to return NULL if any of the subqueries return NULL. I could create a function to do something similar and reference the function in the query, but I can't help thinking there must be a way to get this done in a single query.
Any thoughts or ideas would be greatly appreciated.
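One hedged alternative, sketched against the snippet above rather than taken from it: wrap each correlated subquery in ISNULL so a missing match contributes an empty string, and the view no longer depends on the SET option at all.

SELECT DISTINCT
    ISNULL((SELECT ' ' + t1.description
            FROM transactiontypes t1
            WHERE (t1.MaskValue & HDR.TranTypeID) = 1), '') +
    ISNULL((SELECT ' ' + t2.description
            FROM transactiontypes t2
            WHERE (t2.MaskValue & HDR.TranTypeID) = 2), '') +
    -- ... repeat for mask values 4 and 8 ...
    ISNULL((SELECT ' ' + t5.description
            FROM transactiontypes t5
            WHERE (t5.MaskValue & HDR.TranTypeID) = 16), '') AS [Transaction Type]
FROM TransactionHeader AS HDR;   -- hypothetical name; the HDR source table is not shown in the snippet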
Case: Exporting a report to PDF/printing/TIFF. Report: contains 1 table with 19 columns. 1 column is static, the other 18 are visible at the user's discretion. The report, when printed/exported to PDF, spans 2 pages naturally, 16 columns on the first page, 3 on the second, and the column widths have been adjusted to provide a perfect page span.

User A elects to hide two of the columns and show the rest. The report compiles and the viewable version is perfect, the Excel export is perfect... but the PDF export on the first page causes every fifth column, starting with the last column that was hidden, to be expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show with the last column expanded to take up the same width that the original 2 columns would have taken up, plus its own width.

We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page size selection available for the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.

Any help or suggestions on this issue would be appreciated.
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 out of the 6 have JSON data, so the users have requested to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second JSON column has 7 key/value pairs). Here is what I have done so far:

I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first column (with JSON data) in my query (with CROSS APPLY), I get the right data back, but I get 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: How do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."

2. My second question: How do I get around this error?

SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
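For the second question, a minimal sketch of the usual fix (not from the original post): give each function call its own CROSS APPLY instead of a comma join, since a comma-joined derived table cannot see A's columns, which is what raises the "could not be bound" error. For the first question, the key/value rows coming back from each apply would then still need to be pivoted (for example with MAX(CASE ...) per key) to end up as one wide row.

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT AS A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) AS B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) AS C;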
First, I'd like to figure out the count of how many rows that are not in the current edition have the following:
Second I'd like to be able to select the primary key of all the rows involved
Third I'd like to select all the primary keys of just the rows not in the current edition
Not really sure how to describe this without making a dataset
CREATE TABLE [Project].[TestTable1](
    [TestTable1_pk] [int] IDENTITY(1,1) NOT NULL,
    [Source_ID] [int] NOT NULL,
    [Edition_fk] [int] NOT NULL,
    [Key1_fk] [int] NOT NULL,
    [Key2_fk] [int] NOT NULL,
[Code] .....
Group by fails me because I only want the groups where the Edition_fk values don't match...
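A rough sketch of the three selects, under the assumption that a parameter such as @CurrentEdition identifies the current edition and that Source_ID ties the same logical row across editions (both assumptions, not stated in the post):

DECLARE @CurrentEdition int
SET @CurrentEdition = 1   -- hypothetical value

-- 1) count of rows that are not in the current edition
SELECT COUNT(*) AS NotCurrentCount
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition;

-- 2) primary keys of all rows whose Source_ID has at least one non-current row
SELECT t.TestTable1_pk
FROM Project.TestTable1 AS t
WHERE t.Source_ID IN (SELECT Source_ID
                      FROM Project.TestTable1
                      GROUP BY Source_ID
                      HAVING SUM(CASE WHEN Edition_fk <> @CurrentEdition THEN 1 ELSE 0 END) > 0);

-- 3) primary keys of just the rows that are not in the current edition
SELECT TestTable1_pk
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition;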
Here is my requirement; I'm not sure if this is possible. I am creating a table called Master with columns col1, col2, col3, col4, col5... where col1 and col2 are updatable - this can be done easily.

Col3 and col4 are columns in another table, but can these be just read-only? Is this possible? It is possible with a view, but a view is not friendly with SharePoint CRUD... Col5 is a computed column of col2 and col5? If the above step can be done, then I guess this can be done too.
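A minimal sketch of the computed-column piece, assuming col5 is actually meant to be derived from col2 and col4 (a column cannot be computed from itself); the data types are placeholders:

CREATE TABLE dbo.Master
(
    col1 int NULL,
    col2 int NULL,
    col3 int NULL,          -- mirrored from another table; kept read-only by the app or a trigger
    col4 int NULL,
    col5 AS (col2 + col4)   -- computed column, so it can never be written to directly
);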
I have a query which retrieves a variable number of columns, from 5 to 15, based on the input parameter passed. I am using a table to map all of these columns. If a column is not retrieved in the dataset (I am not talking about NULL data - the column is completely missing), then I want to hide it in my report.
As I am creating the non-clustered indexes for the tables, I don't quite understand how it really matters whether I put the columns in the index key columns or put them into the included columns of the index.

I am really confused about that. I am looking forward to hearing from you, and thank you very much again for your advice and help.
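A short sketch of the difference, with hypothetical table and column names: key columns define the sort order of the index and are what seeks and range scans use, while INCLUDE columns are carried only at the leaf level so a query can be covered without widening the key.

CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)                 -- key column: seekable and sorted
INCLUDE (CustomerName, TotalDue);         -- included: stored at the leaf, not part of the key

-- Covered by the index above: seek on the key, read the included columns.
SELECT CustomerName, TotalDue
FROM dbo.Orders
WHERE OrderDate >= '20150101' AND OrderDate < '20150201';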
Here's another one of my bitchfests about stuff which annoys the *** out of me in SSIS (and no such problems in DTS):

Do you ever wonder how easy it was to set up a text file to DB transform in DTS - I had no problems at all. In SSIS I spent half a day trying to figure out how to get proper column data types for a text file - of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager - BUT guess what - it samples ONLY 1000 rows - so I tried to change that number to 50000 and clicked OK - BUT MS changed it back to 1000 without me noticing, SO NO WONDER some of the data types did not match later on. And boy, what fun it is to change the source columns after you have created a few transforms.

This s**t just breaks... So a word about Derived Columns - pretty useful feature, heh? It's not f***ing useful if it DELETES SOME of the code itself after there have been changes in the data flow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow & messed up derived columns just because the lineage IDs don't match.

Metadata - it would be useful if you could change it and refresh it - I'm just sick and tired of it showing warnings and errors when there's nothing wrong - so after a change I need to double-click all my transforms so that those red & yellow boxes disappear.

Oh, and why I passionately dislike Derived Columns - so you create new fields based on some data - you do some stuff - combine multiple columns into one, but you have no way of saying remove the columns from the pipeline. Why would you need that - well, if you have 50K+ rows with 30+ columns then it's EXTRA useless memory overhead for your package.

Hopefully one day I will understand how SSIS works (not an ez task, I say) - then I might be able to spend more time on development and less time on my bitchfest - UNTIL then --> Another Day - Another Hassle with SSIS.
Basically, I'm given a daily schedule on two separate rows for shift 1 and shift 2 for the same employee, and I'm trying to align both shifts in one row as shown below in the 'My desired results' section.
Sample Data:
;WITH SampleData ([ColumnA], [ColumnB], [ColumnC], [ColumnD]) AS
(
    SELECT 5060, '04/30/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '04/30/2015', '13:30', '15:30' UNION ALL
    SELECT 5060, '05/02/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '05/02/2015', '13:30', '15:30'
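A sketch of one way to fold the two shift rows into one, assuming ColumnA is the employee, ColumnB the date, and the earlier start/end pair is shift 1 (the sample data above is truncated, so this is an outline rather than a tested answer):

;WITH SampleData (ColumnA, ColumnB, ColumnC, ColumnD) AS
(
    SELECT 5060, '04/30/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '04/30/2015', '13:30', '15:30'
),
Numbered AS
(
    -- number the shifts per employee/day so they can be pivoted onto one row
    SELECT ColumnA, ColumnB, ColumnC, ColumnD,
           ROW_NUMBER() OVER (PARTITION BY ColumnA, ColumnB ORDER BY ColumnC) AS ShiftNo
    FROM SampleData
)
SELECT ColumnA, ColumnB,
       MAX(CASE WHEN ShiftNo = 1 THEN ColumnC END) AS Shift1Start,
       MAX(CASE WHEN ShiftNo = 1 THEN ColumnD END) AS Shift1End,
       MAX(CASE WHEN ShiftNo = 2 THEN ColumnC END) AS Shift2Start,
       MAX(CASE WHEN ShiftNo = 2 THEN ColumnD END) AS Shift2End
FROM Numbered
GROUP BY ColumnA, ColumnB;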
Hello, using SQL Server 2000, I'm trying to put together a query that will tell me the following information about a view:

The view name
The names of the view's columns
The names of the source tables used in the view
The names of the columns that are used from the source tables

Borrowing code from the VIEW_COLUMN_USAGE view, I've got the code below, which gives me the view name, source table name, and source column name. And I can easily enough get the view columns from the syscolumns table. The problem is that I haven't figured out how to link a source column name to a view column name. Any help would be appreciated.

Gary

select
    v_obj.name as ViewName,
    t_obj.name as SourceTable,
    t_col.name as SourceColumn
from
    sysobjects t_obj,
    sysobjects v_obj,
    sysdepends dep,
    syscolumns t_col
where
    v_obj.xtype = 'V'
    and dep.id = v_obj.id
    and dep.depid = t_obj.id
    and t_obj.id = t_col.id
    and dep.depnumber = t_col.colid
order by
    v_obj.name,
    t_obj.name,
    t_col.name
I am working on a Statistical Reporting system where:
Data Repository: SQL Server 2005
Business Logic Tier: Views, User Defined Functions, Stored Procedures
Data Access Tier: Stored Procedures
Presentation Tier: Reporting Services

The end user will be able to slice & dice the data for the report by:

different organizational hierarchies
a different number of layers within a hierarchy
a selected organization, or all of the organizations within the organizational hierarchy
combinations of selection criteria, where the selection criteria are independent of each other

Below is an example of 2 organizational hierarchies:

Hierarchy 1
Country -> Work Group -> Project Team (Project Team within Work Group within Country)

Hierarchy 2
Client -> Contract -> Project (Project within Contract within Client)

Based on the 2 different hierarchies above, here are a couple of use cases:

Country = "USA", Work Group = "Network Infrastructure", Project Team = all teams
Country = "USA", Work Group = all work groups

The questions are how to implement the data interface (stored procs) to the reports, and how to implement the business logic to handle the different hierarchies & different numbers of levels. I did get help earlier in this forum on how to handle a parameter having either a specific value or a NULL value (to select "all"): (WorkGroup = @argWorkGroup OR @argWorkGroup IS NULL)
Any ideas? Should I be doing this in SQL statements, or should I be looking at using Analysis Services?
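A rough sketch of the optional-parameter pattern mentioned above, extended to all three levels of one hierarchy; the procedure, table, and column names are assumptions, not from the post:

CREATE PROCEDURE dbo.rptStatsByOrg
    @argCountry   varchar(50) = NULL,
    @argWorkGroup varchar(50) = NULL,
    @argProjTeam  varchar(50) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    -- Passing NULL for any level means "all" at that level.
    SELECT Country, WorkGroup, ProjectTeam, SUM(StatValue) AS StatValue
    FROM dbo.StatsFact
    WHERE (Country     = @argCountry   OR @argCountry   IS NULL)
      AND (WorkGroup   = @argWorkGroup OR @argWorkGroup IS NULL)
      AND (ProjectTeam = @argProjTeam  OR @argProjTeam  IS NULL)
    GROUP BY Country, WorkGroup, ProjectTeam;
END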
I am planning to use transactional replication (instead of merge replication) on my SQL Server 2000. My application is already live and is being used by real users.

How can I ensure that the replicated data on the other server will have exactly the same values in identity columns and date columns (where I set the default date to getdate())?
It is very important for me to have a mirror image of data (without using clustering servers).
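A hedged sketch of the part I'm fairly sure about: marking the identity column NOT FOR REPLICATION lets the replication agent insert the publisher's identity values verbatim at the subscriber, and because the agent supplies every column value explicitly, a getdate() default does not re-fire on the subscriber. The table and column names below are hypothetical.

CREATE TABLE dbo.Orders
(
    OrderID     int IDENTITY(1,1) NOT FOR REPLICATION NOT NULL PRIMARY KEY,
    CreatedDate datetime NOT NULL DEFAULT (getdate()),  -- the stored value is what gets replicated
    CustomerID  int NOT NULL
);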
Basically, I need to get the SUM of the sum of three columns, and all three columns have NULLs. To make it more complicated, the result set must return the top 20 in descending order as well.

I keep facing different issues whether I try to use COALESCE, ISNULL, SUM, COUNT, anything. My query never returns anything but 0 or NULL, regardless of whether I build a CTE or just use a plain query.

So I'm using Col A to get the TOP 20 in order (which is fine), while also trying to add together the sums of Col A + Col B + Col C for each of the twenty rows...
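A minimal sketch with hypothetical table and column names: wrap each column in ISNULL so a NULL doesn't null out the whole row total, and order by Col A for the top 20.

SELECT TOP 20
       ColA, ColB, ColC,
       ISNULL(ColA, 0) + ISNULL(ColB, 0) + ISNULL(ColC, 0) AS RowTotal   -- NULL -> 0 before adding
FROM dbo.MyTable
ORDER BY ColA DESC;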
I'd like to generate the calculated column SCORE based on various scenarios in the other columns. eg.
if n1<10 and n2<10 then i=i + 1 if n4-n3=1 then i=i + 1 if more than 2 consecutive numbers then i=i + 1
So, I need to build the score. I've tried the procedure below and it works as a pass or fail but is too limiting. I'd like something that increments the variable @test1.
declare @test1 int
set @test1 = 0

select top 10 n1, n2, n3, n4, n5, n6,
    case when (n1 = 2 and n2 > 5) then @test1 + 1 else @test1 end as t2
from allNumbers
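A sketch of one way to build SCORE without a variable at all: add one CASE expression per rule, so each rule that matches contributes 1 (the thresholds follow the pseudocode above; the "more than 2 consecutive numbers" rule would need extra work, e.g. comparing n2-n1, n3-n2, and so on).

select top 10 n1, n2, n3, n4, n5, n6,
       case when n1 < 10 and n2 < 10 then 1 else 0 end +
       case when n4 - n3 = 1          then 1 else 0 end +
       case when n1 = 2  and n2 > 5   then 1 else 0 end as SCORE
from allNumbers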
I have a scenario that reminds me of a pivot table and I am wondering if there is a way to handle this in SQL.
I have four tables. Product Line, Item, Property, and Value.
A product line has many items, an item can have many properties, and a property can have many values.

I want to select a product line and show all the items with the properties as column headers and the values as the data. The thing I am having trouble with is that the properties for an item vary from a few to a whole bunch.
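A sketch of the usual dynamic-pivot approach; the table and column names below are guesses at the schema described, not taken from it. The column list has to be built at run time precisely because the set of properties varies per item.

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build the pivot column list from the distinct property names.
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(p.PropertyName)
                      FROM Property AS p
                      GROUP BY p.PropertyName
                      FOR XML PATH('')), 1, 1, '');

-- Pivot the value rows into one column per property for the chosen product line.
SET @sql = N'
SELECT ItemName, ' + @cols + N'
FROM (SELECT i.ItemName, p.PropertyName, v.Value
      FROM Item AS i
      JOIN Property AS p ON p.ItemID = i.ItemID
      JOIN [Value] AS v  ON v.PropertyID = p.PropertyID
      WHERE i.ProductLineID = @ProductLineID) AS src
PIVOT (MAX(Value) FOR PropertyName IN (' + @cols + N')) AS pvt;';

EXEC sp_executesql @sql, N'@ProductLineID int', @ProductLineID = 1;  -- hypothetical key value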
Just to confirm, do identity columns and XML columns work OK with database mirroring? That is, are all data types supported with mirroring, and identities aren't an issue?
Transactional replication with identity columns was a right pain in the **** in SQL 2000. I'm assuming that mirroring doesn't have these issues, but want to be sure.
I have a problem and I hope I can get answers or advice to solve it.

I have about 20 Excel files, and in each file there is 1 sheet (Planning). What I need to do is loop over the 20 files (actually this is the easy part and I have already done it); the hard part is that while looping I need to open each Excel file, loop over the 256 columns in it, and extract the data from it into a SQL Server database.
Can someone please explain the difference between Output and External columns? I can't fathom why "Output" columns aren't good enough. In other words, why is there a need for, or value in, having two types of "output" columns?
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and Test_ID. An example with values is below:

In Table_One all types come under one column, and the values of all types (Mammal, Fish, Bird, Reptile) come under another column (Animals). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and using joins to form this final table. So the column Bird, Reptile, Mammal and Fish all come from a different copy of Table_one.
For e.g
Select Test_Name AS 'Test_Name',
       Table_Bird.Animal AS 'Birds',
       Table_Mammal.Animal AS 'Mammal',
       Table_Reptile.Animal AS 'Reptile',
       Table_Fish.Animal AS 'Fish'
From Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
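A sketch of one way around the multiple-copy joins, using conditional aggregation so a test still shows up when one of the animal types has no row; it assumes at most one animal per type per test, and uses the column names listed earlier.

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Birds,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two AS t2
LEFT JOIN Table_One AS t1 ON t1.TestID = t2.Test_ID   -- LEFT JOIN keeps tests with missing types
GROUP BY t2.Test_Name;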
TRANAMT being the amount paid & TOTBAL being the balance due per the NAMEID & RMPROPID specified. The other table includes a breakdown of the total balance, in a manner of speaking, by charge code (through a SUM(OPENAMT) query of DISTINCT CHGCODE),

also with a remaining balance (per CHGCODE) column. Is there any alternative solution that would effectively split TABLE1.TRANAMT up into the respective TABLE2.CHGCODE balances? Either way, I can't figure out how to word the queries.
Adding more columns in a matrix report that don't belong to the columns' drilldown dimensions...
That is, for example, having the following report:
Product Family
Product
Country City Number of units sold
Then I would add some ratios, that is, Units Sold/Months (sold per month) and another that is the average for Product Family (Units Sold/Number of Product Family), to give an example... Some columns could be precalculated prior to the report, so I won't get into those; the real problem I don't see how to solve is adding one or two columns to show these calculated columns that don't depend on the column groups but do depend on the row groups...
Any guidance on that?
The only way I am seeing right now is to set it up as two different reports, and that is not what my client wants...
Please note that the number of columns is different in each table. I want to dump the data from the Source table into the Destination table - that is, the rows of 2 columns in the Source table into the last 2 columns of the Destination table. Also, the order of the columns in the Destination table will vary, so I need a way to insert the data dynamically in bulk, but I will know the column names for sure before inserting.

Is there any way to bulk insert into these columns?
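A minimal sketch with hypothetical table and column names: naming the destination columns explicitly means their position in the table doesn't matter, and any unlisted destination columns fall back to their defaults or NULL; since the names are only known at run time, the same statement could be assembled as dynamic SQL.

INSERT INTO dbo.Destination (DestColX, DestColY)   -- explicit column list, order-independent
SELECT SrcCol1, SrcCol2
FROM dbo.Source;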
I need to start encrypting several fields in a database and have been doing some testing with a test database first. I've run into problems when attempting to restore the database either to the same server (but a different database) or to a separate server.
First, here's how i created the symmetric key and encrypted data in the original database:
create master key encryption by password = 'testAppleA3';
create certificate test with subject = 'test certificate', EXPIRY_DATE = '1/1/2010';
create symmetric key sk_Test with algorithm = triple_des encryption by certificate test;
open symmetric key sk_Test decryption by certificate test;
insert into employees values (101, 'Jane Doe', encryptbykey(key_guid('sk_Test'), '$200000'));
insert into employees values (102, 'Bob Jones', encryptbykey(key_guid('sk_Test'), '$500000'));
select * from employees
--delete from employees
select id, name, cast(decryptbykey(salary) as varchar(10)) as salary from employees
close all symmetric keys
Next I back up this test database and restore it to a new database on a different server (same issue if restored to a different database on the same server).
Then if i attempt to open the key in the new database and decrypt:
open symmetric key sk_Test decryption by certificate test;
I get the error: An error occurred during decryption.
Ok, well, not unexpected, so after reading the forums I try doing the below first in the new database:
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
Then I try opening the key again and get the error again:
An error occurred during decryption.
So then it occurs to me, maybe I need to drop and recreate it, so I do
drop symmetric key sk_test
then
create symmetric key sk_Test with algorithm = triple_des encryption by certificate test;
and then try to open it.
Same error!
So then I decide, let's drop everything: the master key, the certificate, and the symmetric key:
drop symmetric key sk_test
drop certificate test
drop master key
Then recreate the master key:
create master key encryption by password = 'testAppleA3';
Restore the certificate from a backup I had made to a file:
CREATE CERTIFICATE test FROM FILE = 'c:storedcertsencryptiontestcert'
Recreate the symmetric key again:
create symmetric key sk_Test with algorithm = triple_des encryption by certificate test;
And now open the key only to get the error:
Cannot decrypt or encrypt using the specified certificate, either because it has no private key or because the password provided for the private key is incorrect.
So what am I doing wrong here? In this scenario I would appear to have lost all access to decrypt the data in the database, despite restoring from a backup which restored the symmetric key and certificate, and I obviously know the password for the master key.
I also tried running the command
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
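A hedged observation on the last error: it typically appears when the certificate was backed up without its private key, so the restored certificate can encrypt but not decrypt. A sketch of exporting and recreating the certificate together with its private key (the file paths and passwords below are placeholders):

-- On the source server: back up the certificate together with its private key.
BACKUP CERTIFICATE test
    TO FILE = 'c:\backups\testcert.cer'
    WITH PRIVATE KEY (FILE = 'c:\backups\testcert.pvk',
                      ENCRYPTION BY PASSWORD = 'Pr1vateKeyP@ssw0rd');   -- hypothetical password

-- On the destination database: recreate it with the private key, after which
-- OPEN SYMMETRIC KEY sk_Test DECRYPTION BY CERTIFICATE test should succeed.
CREATE CERTIFICATE test
    FROM FILE = 'c:\backups\testcert.cer'
    WITH PRIVATE KEY (FILE = 'c:\backups\testcert.pvk',
                      DECRYPTION BY PASSWORD = 'Pr1vateKeyP@ssw0rd');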
I have a report which is a list of items, and I display everything about the item. It is great. My report table in the layout tab is simple: Header, Detail, Footer. Each item has 65 columns. The number of items (rows) varies by what you want to see. Example data:

Item#, Description, CaseSalePrice, Cost, BottleSalePrice, Discount
123, Grenadine, 100.00, 75.00, 15.50, 2.00
456, Lime Juice, 120.00, 81.00, 17.25, 2.00
What I am actually doing is running the top example and saving to Excel, then copying the sheet, creating a new sheet, and doing a Paste Special transpose, which gives the users what they want to see.

I want to grab that table object in the report layout tab and twist it 90 degrees so the header is on the left, the detail is in the middle, and the footer is on the right. It would be perfect.

The dynamic column need is really the problem here. I never know how many items will be in the report; they all have the same basic information, like description and pricing.
I am all out of creative ideas, any help would be appreciated.
I'm in the process of converting legacy DTS packages to SSIS. I need to populate a table that has more fields than the source file. In DTS I did this with an ActiveX script. How do I go about doing this within SSIS?
In the ActiveX script most of the fields were defaulted with either spaces or zeroes.
One of the Destination fields needs to be incremented by 1 for each new record inserted.
Friends, could anyone help me out and explain how I can create columns in SQL when I enter a character value in a text box? Let's say, if I enter A, one column needs to be created; if B, two need to be created in an existing table... How can I do this? Thanks for any help.
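A rough sketch of the SQL side, with hypothetical table and column names: map the character to a count, then build and run ALTER TABLE ... ADD statements with dynamic SQL (the text-box value itself would be passed in from the application).

declare @input char(1)
set @input = 'B'            -- value captured from the text box

declare @howMany int
set @howMany = case @input when 'A' then 1 when 'B' then 2 else 0 end

declare @i int, @sql nvarchar(max)
set @i = 1
while @i <= @howMany
begin
    -- ExtraCol1, ExtraCol2, ... are placeholder column names
    set @sql = N'ALTER TABLE dbo.ExistingTable ADD '
             + quotename('ExtraCol' + cast(@i as varchar(10))) + N' varchar(50) NULL;'
    exec sp_executesql @sql
    set @i = @i + 1
end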