Is there a way to extract the time from a field like this
2005-11-14-14:30:00:000
I just want to use the 14:30 out of that field but can't seem to find the right way to do it. I can use DATENAME and get the hour, but I need it down to the minute.
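One approach that often works (assuming the column really is a datetime on SQL Server; the column and table names below are placeholders) is to convert to the hh:mm:ss style and keep only the first five characters:

-- Style 108 formats a datetime as hh:mm:ss; LEFT(..., 5) keeps just hh:mm.
-- open_time and dbo.SomeTable are placeholder names.
SELECT LEFT(CONVERT(varchar(8), open_time, 108), 5) AS open_hhmm
FROM dbo.SomeTable;

DATEPART(minute, open_time) would give the minute as a number on its own if that is all that is needed.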
I'm having a bit of trouble with dates/times. All I want to do is extract the contents of a table in an old Informix database and load it into a table in a SQL 2005 database.
The problem is that when I try to extract from the source table, I get the following error:
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E07 Description: "Error converting data type DBTYPE_DBTIMESTAMP to datetime.".Error: 0xC004701A at Dump Dacom Tables, DTS.Pipeline: component "mac_header" (181) failed the pre-execute phase and returned error code 0xC0202009.
It would have been nice if it could tell me what columns it was having trouble with. Anyway, I figured it out. There are two columns called time_open and time_close that contain times but no dates. This is how the source columns are configured:
time_open datetime hour to minute
time_close datetime hour to minute
This is a really old database. We started using it about 14 years ago. I can get an ODBC connection to it fine with any application except SQL Server. So, I have to do an OPENROWSET to get access to the data.
SELECT * FROM OPENROWSET('MSDASQL', 'Connection String Stuff', 'select * from dacom.mac_header')

Question 1: Why can't I get an ODBC connection to this Informix 5.10 database?
Question 2: How can I extract tables with this sort of data in them? I'm happy to store it as a normal date/time field in the destination SQL 2005 database, but I can't convert something that I can't extract.
I have two tables - a stock table and a transaction table. The stock table is called stock and the transaction table is called trans. Together the tables make up a point-of-sale database. Both tables have a joining field called prod_no.
I want to extract the stock lines whose stock.prod_no is not present in the trans table for a specified period of time.
Basically, I want to find dead stock lines that have not been sold and, therefore, are not in the transaction table, but that have a stock figure. I have to enter a start and end date parameter using the ddate field in the trans table. I have looked at left, right and outer joins but with no luck, and I don't even know if this is the correct query type to use.
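One common pattern for this kind of "not sold in the period" question is NOT EXISTS against the transaction table. A minimal sketch, assuming the tables and columns named above (stock.prod_no, trans.prod_no, trans.ddate) and placeholder parameter values:

-- Stock lines with no transaction between @StartDate and @EndDate.
DECLARE @StartDate datetime
DECLARE @EndDate   datetime
SET @StartDate = '20140101'   -- placeholder values
SET @EndDate   = '20140630'

SELECT s.*
FROM stock AS s
WHERE NOT EXISTS (SELECT 1
                  FROM trans AS t
                  WHERE t.prod_no = s.prod_no
                    AND t.ddate BETWEEN @StartDate AND @EndDate);

A LEFT JOIN to trans with a WHERE t.prod_no IS NULL test would also work; NOT EXISTS just reads more directly as "no sales in the period".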
I want to see all the product lines whether or not they are present in the table, i.e.
Product Line Product Line Description Total Usage Quantity
10000 Raw Material 0
20000 Intermediate Product 0
30000 Bearing 2713321
32000 High Strength 4258197
34000 High Temp 0
36000 Corrosion Resistant 639492
50000 High Speed 1452153
52000 Die Steel Hot Work 1614727
54000 Die Steel Cold Work 464943
88000 Misc 0
90000 Conversion 0

This is the result I am looking for. The problem is that those records are not in the table, so naturally they will not be shown in the result. Can anyone help me modify the query below so that I can get this result? (A possible approach is sketched after the result.)
SELECT [PRODL] AS 'Product Line',
       (CASE WHEN prodl = 10000 THEN 'Raw Material'
             WHEN prodl = 20000 THEN 'Intermediate Product'
             WHEN prodl = 30000 THEN 'Bearing'
             WHEN prodl = 32000 THEN 'High Strength'
             WHEN prodl = 34000 THEN 'High Temp'
             WHEN prodl = 36000 THEN 'Corrosion Resistant'
             WHEN prodl = 50000 THEN 'High Speed'
             WHEN prodl = 52000 THEN 'Die Steel Hot Work'
             WHEN prodl = 54000 THEN 'Die Steel Cold Work'
             WHEN prodl = 88000 THEN 'Misc'
             WHEN prodl = 90000 THEN 'Conversion'
        END) AS 'Product Line Description',
       SUM(CASE WHEN [PRODL] = 10000 THEN [Usage Qty]
                WHEN [PRODL] = 20000 THEN [Usage Qty]
                WHEN [PRODL] = 30000 THEN [Usage Qty]
                WHEN [PRODL] = 32000 THEN [Usage Qty]
                WHEN [PRODL] = 34000 THEN [Usage Qty]
                WHEN [PRODL] = 36000 THEN [Usage Qty]
                WHEN [PRODL] = 50000 THEN [Usage Qty]
                WHEN [PRODL] = 52000 THEN [Usage Qty]
                WHEN [PRODL] = 54000 THEN [Usage Qty]
                WHEN [PRODL] = 88000 THEN [Usage Qty]
                WHEN [PRODL] = 90000 THEN [Usage Qty]
           END) AS 'Total Usage Quantity'
FROM [LATCUBDAT].[dbo].[RMU]
GROUP BY prodl
ORDER BY prodl
Result of The Aforesaid Query
Product Line Product Line Description Total Usage Quantity
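Since rows for the missing product lines simply don't exist in RMU, one way to keep them in the output is to drive the query from a fixed list of product lines and LEFT JOIN the data onto it. A sketch using the table and column names from the query above (the VALUES row-constructor needs SQL Server 2008 or later; on 2005 a temp table of product lines does the same job):

SELECT pl.prodl AS 'Product Line',
       pl.descr AS 'Product Line Description',
       ISNULL(SUM(r.[Usage Qty]), 0) AS 'Total Usage Quantity'
FROM (VALUES (10000, 'Raw Material'),
             (20000, 'Intermediate Product'),
             (30000, 'Bearing'),
             (32000, 'High Strength'),
             (34000, 'High Temp'),
             (36000, 'Corrosion Resistant'),
             (50000, 'High Speed'),
             (52000, 'Die Steel Hot Work'),
             (54000, 'Die Steel Cold Work'),
             (88000, 'Misc'),
             (90000, 'Conversion')) AS pl (prodl, descr)
LEFT JOIN [LATCUBDAT].[dbo].[RMU] AS r ON r.prodl = pl.prodl
GROUP BY pl.prodl, pl.descr
ORDER BY pl.prodl;

Lines with no matching rows then come back with a total of 0 instead of disappearing from the result.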
I am using the following to extract the column names of a table. I would like to do this for the whole database. Currently I am pasting the results into an Excel spreadsheet. Is there a better way of doing this? Here is the query:
SELECT name FROM syscolumns WHERE [id] = OBJECT_ID('tablename')
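For the whole database, the ANSI INFORMATION_SCHEMA views avoid having to run the syscolumns query once per table; a sketch:

-- Column names for every table in the current database.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;

The whole result can then be copied into Excel in one go, or exported with bcp/SSIS if it needs to be refreshed regularly.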
My data source has some columns I have to 'translate' first and then insert into my destination table.
Example source data:

key  size     height
1    'Small'  'Tall'
2    'Big'    'Short'

has to become

1    'Y'   'P'
2    'N'   'D'
I thought of creating a lookup table (I'm talking about the real table, not a lookup transformation table) that would have these columns: column name, value_source, value_dest. Example:

col_name  vl_source  vl_dest
size      'Small'    'Y'
size      'Big'      'N'
height    'Tall'     'P'

... and so on, I believe you get the point.
How would you extract the needed values? Can I use a SELECT statement in a derived column expression? Any ideas? I'm not really fond of the idea of creating n lookups, one for each column I have to translate; is there a slicker solution?
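One option is to do the translation in the source query itself, joining the proposed lookup table once per column to translate, so the data flow never needs a lookup transformation per column. A sketch, where dbo.SourceTable and dbo.translation are placeholder names and the lookup columns follow the example above:

SELECT s.[key],
       ls.vl_dest AS size,
       lh.vl_dest AS height
FROM dbo.SourceTable AS s
LEFT JOIN dbo.translation AS ls
       ON ls.col_name = 'size'   AND ls.vl_source = s.size
LEFT JOIN dbo.translation AS lh
       ON lh.col_name = 'height' AND lh.vl_source = s.height;

Each column still needs its own join, but it is one table and one query rather than n lookup components in the package.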
Hello. I have a somewhat simple question. I have a table inside my database with some columns. I'd like to extract distinct values from one column and insert them into another table. My table is named article and I'd like to select all the distinct values from the column named type inside my article table. Then I need to insert the values into a new table called type, but I would also like to have two columns inside my type table: one called counter, which is an auto-increment PK, and another one named type, where the results from my query would be inserted. I've tried a variety of queries but none of them have managed to do this. Could anyone help me construct this query?
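A minimal sketch along those lines (the varchar length is an assumption; adjust it to match the real article.type definition):

-- Create the type table with an auto-increment primary key.
CREATE TABLE dbo.[type] (
    counter int IDENTITY(1,1) PRIMARY KEY,
    [type]  varchar(100) NOT NULL
);

-- Load the distinct values; the IDENTITY column fills itself in.
INSERT INTO dbo.[type] ([type])
SELECT DISTINCT a.[type]
FROM dbo.article AS a
WHERE a.[type] IS NOT NULL;

Only the type column is listed in the INSERT, so the counter values are generated automatically.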
I have been placed in the position of administering our SQL server 6.5 (Microsoft). Being new to SQL and having some knowledge of databases (used to use Foxpro 2.6 for...DOS!) I am faced with an ever increasing table of incoming call information from our Ascend MAX RAS equipment. This table increases by 900,000 records a month. The previous administrator (no longer available) was using a Visual Foxpro 5 application to archive and remove the data older than 60 days. Unfortunately he left and took with him Visual Fox and all of his project files.
My question is this: Is there an easy way to archive then remove the data older than 60 days from the table? I would like to archive it to a tape drive. We need to maintain this archive for the purposes of searching back through customer calls for IP addresses on certain dates and times. We are an ISP, and occasionally need to give this information to law enforcement agencies. So we cannot just delete it.
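A common pattern is to copy the old rows into an archive table and then delete them in the same transaction, after which the archive table (or a dump of it) can be written to tape. A sketch, where CallLog, CallLog_Archive and call_date are placeholder names and the archive table is assumed to already exist with the same columns:

BEGIN TRANSACTION

-- Copy everything older than 60 days into the archive table.
INSERT INTO CallLog_Archive
SELECT * FROM CallLog
WHERE call_date < DATEADD(day, -60, GETDATE())

-- Then remove those rows from the live table.
DELETE FROM CallLog
WHERE call_date < DATEADD(day, -60, GETDATE())

COMMIT TRANSACTION

On a table growing by 900,000 rows a month it is usually worth running the delete in smaller date-range batches to keep the transaction log manageable, and scheduling the whole thing as a recurring task.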
I am trying to extract what I believe is called a Node from a column in a table that contains XML. What's the best way to do this? It is pretty straightforward since I'm not even looking to include a WHERE clause.
What I have so far is:
SELECT CCH.OrderID, CCH.CCXML.query('/cc/auth') FROM tblCCH AS CCH
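That query() call returns the auth node as an XML fragment; if the goal is individual values out of the node, the value() method does that (assuming CCXML is an xml column; the /cc/auth/code path below uses a made-up element name for illustration):

SELECT CCH.OrderID,
       CCH.CCXML.value('(/cc/auth/code)[1]', 'varchar(50)') AS AuthCode
FROM tblCCH AS CCH;

No WHERE clause is needed, exactly as in the original query.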
Hello,

I am currently working on a monthly load process with a datamart. I originally designed the tables in a normalized fashion with the idea that I would denormalize as needed once I got an idea of what the performance was like. There were some performance problems, so the decision was made to denormalize. Now the users are happy with the query performance, but loading the tables is much more difficult.

Specifically... There were two main tables, a header table and a line item table. These have been combined into one table. For my loads I still receive them as separate files though. The problem is that I might receive a line item for a header that began two months ago. When this happens I don't get a header record in the current month's file - I just get the record in the line items file. So now I have to join the header and line item tables in my staging database to get the denormalized rows, but I also may have to get header information from my main denormalized table (~150 million rows). For simplicity I will only include the primary keys and one other column to represent the rest of the row below. The tables are actually very wide.

Staging database:

CREATE TABLE dbo.Claims (
    CLM_ID   BIGINT NOT NULL,
    CLM_DATA VARCHAR(100) NULL
)

CREATE TABLE dbo.Claim_Lines (
    CLM_ID      BIGINT NOT NULL,
    LN_NO       SMALLINT NOT NULL,
    CLM_LN_DATA VARCHAR(100) NULL
)

Target database:

CREATE TABLE dbo.Target (
    CLM_ID      BIGINT NOT NULL,
    LN_NO       SMALLINT NOT NULL,
    CLM_DATA    VARCHAR(100) NULL,
    CLM_LN_DATA VARCHAR(100) NULL
)

I can either pull back all of the necessary header rows from the target table to the claims table and then do one insert using a join between claims and claim lines into the target table, OR I can do one insert with a join between claims and claim lines and then a second insert with a join between claim lines and target for those lines that weren't already added.

Some things that I've tried:

INSERT INTO Staging.dbo.Claims (CLM_ID, CLM_DATA)
SELECT DISTINCT T.CLM_ID, T.CLM_DATA
FROM Staging.dbo.Claim_Lines CL
LEFT OUTER JOIN Staging.dbo.Claims C ON C.CLM_ID = CL.CLM_ID
INNER JOIN Target.dbo.Target T ON T.CLM_ID = CL.CLM_ID
WHERE C.CLM_ID IS NULL

INSERT INTO Staging.dbo.Claims (CLM_ID, CLM_DATA)
SELECT T.CLM_ID, T.CLM_DATA
FROM Staging.dbo.Claim_Lines CL
LEFT OUTER JOIN Staging.dbo.Claims C ON C.CLM_ID = CL.CLM_ID
INNER JOIN Target.dbo.Target T ON T.CLM_ID = CL.CLM_ID
WHERE C.CLM_ID IS NULL
GROUP BY T.CLM_ID, T.CLM_DATA

INSERT INTO Staging.dbo.Claims (CLM_ID, CLM_DATA)
SELECT DISTINCT T.CLM_ID, T.CLM_DATA
FROM Target.dbo.Target T
INNER JOIN (SELECT CL.CLM_ID
            FROM Staging.dbo.Claim_Lines CL
            LEFT OUTER JOIN Staging.dbo.Claims C ON C.CLM_ID = CL.CLM_ID
            WHERE C.CLM_ID IS NULL) SQ ON SQ.CLM_ID = T.CLM_ID

I've also used EXISTS and IN in various queries. No matter which method I use, the query plans tend to want to do a clustered index scan on the target table (actually a partitioned view partitioned by year). The number of headers that were in the target but not the header file this month was about 42K out of 1M.

So.... any other ideas on how I can set up a query to get the distinct headers from the denormalized table? Right now I'm considering using worktables if I can't figure anything else out, but I don't know if that will really help me much either.

I'm not looking for a definitive answer here, just some ideas that I can try.

Thanks,
-Tom.
Hello,

I'm creating an audit table and associated triggers to be able to capture any updates and deletes from various tables in the database. I know how to capture the records that have been updated or deleted, but is there any way that I can cycle through a changed record, look at the old vs new values and capture only the values that have changed?

To give you a better idea of what I'm trying to do: instead of creating a copy of the original table (some tables have many fields) and creating a whole record if a type or bit field has been changed, I'd like to only capture the change in a single audit table that will have the following fields:

AuditID int IDENTITY(1,1)
TableName varchar(100)
FieldName varchar(100)
OldValue varchar(255)
NewValue varchar(255)
AuditDate datetime DEFAULT(GetDate())

Any direction would be greatly appreciated.

Thanks!
Rick
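One way to get the "only the changed values" behaviour is an UPDATE trigger that compares the inserted and deleted pseudo-tables column by column and writes one narrow audit row per changed column. A sketch for a single column; dbo.Customer, CustomerID, Status and dbo.AuditLog are placeholder names, and the block would be repeated (or generated dynamically from the system catalog) for each audited column:

CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One block like this per audited column.
    IF UPDATE(Status)
        INSERT INTO dbo.AuditLog (TableName, FieldName, OldValue, NewValue)
        SELECT 'Customer', 'Status',
               CONVERT(varchar(255), d.Status),
               CONVERT(varchar(255), i.Status)
        FROM inserted AS i
        INNER JOIN deleted AS d ON d.CustomerID = i.CustomerID
        WHERE ISNULL(CONVERT(varchar(255), d.Status), '') <>
              ISNULL(CONVERT(varchar(255), i.Status), '');
END

AuditID and AuditDate come from the IDENTITY and DEFAULT in the audit table definition above, so the trigger never has to mention them.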
I want to calculate the target based on the Flag value: if the Flag value is "Y" then MAX(Customer Target), else MAX(SLA target). The Flag column contains "Y", "N" and some blank values. Flag, Customer Target and SLA target are the columns in Table1. I have used the below formulas
I wanted to know: when I create a table using SQL Server, is the table metadata, like schemaname and catalogname, stored in memory or on disk? I am trying to extract schemaname and catalogname when the database is down or, more precisely, when I have no connection string. I know about sysindexes, systables, etc., but I am not able to work out whether they store the data that is useful to me, such as catalogname.
Is there a SQL query to extract the column names from sysindexes?
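sysindexes itself does not store column names; the key columns of each index have to come from the column catalog views. A sketch for SQL Server 2005 and later ('dbo.tablename' is a placeholder):

SELECT i.name AS index_name,
       c.name AS column_name,
       ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.tablename')
ORDER BY i.name, ic.key_ordinal;

On older versions the equivalent information comes from sysindexkeys joined to syscolumns.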
I'm trying to extract a column from a table based on certain conditions. This is for PowerPivot.
Here is the scenario:
I have a table "tb1" with (project_id, month_end_date, monthly_proj_cost ) and table "tb2" with (project_id, key_member_type, key_member, start_dt_active, end_dt_active).
I would like to extract key_member where key_member_type = "PM" and the member is active as of tb1.month_end_date.
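If the lookup can be done in the source query feeding PowerPivot, a join that applies both conditions would look something like the sketch below (assuming start_dt_active/end_dt_active bound the active period and the table/column names are exactly as listed above):

SELECT t1.project_id,
       t1.month_end_date,
       t2.key_member
FROM tb1 AS t1
LEFT JOIN tb2 AS t2
       ON  t2.project_id = t1.project_id
       AND t2.key_member_type = 'PM'
       AND t1.month_end_date BETWEEN t2.start_dt_active AND t2.end_dt_active;

The LEFT JOIN keeps tb1 rows that have no active PM as of the month-end date; an INNER JOIN would drop them instead.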
I would like to return the nearest date from Table A for each row in Table B, for example:
ID W001 in Table B should return ID A002, CreatedDatetime 2014-06-03 20:05:48.000
ID W002 in Table B should return ID A004, CreatedDatetime 2014-06-04 01:05:48.000
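A sketch of one way to do that with OUTER APPLY (SQL Server 2005+), assuming Table B also has a CreatedDatetime to compare against; TableA, TableB and the column names are placeholders based on the example:

SELECT b.ID AS B_ID,
       x.ID AS A_ID,
       x.CreatedDatetime
FROM TableB AS b
OUTER APPLY (SELECT TOP (1) a.ID, a.CreatedDatetime
             FROM TableA AS a
             ORDER BY ABS(DATEDIFF(second, a.CreatedDatetime, b.CreatedDatetime))
            ) AS x;

For each Table B row the subquery picks the single Table A row whose CreatedDatetime is closest in time, whether it falls before or after.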
I want to compare two columns in the same table, called start date and end date, for one clientId. If a clientId has continuous referenceids (the start date and end date of each reference follow straight on from the previous one), I don't need any caseopendate; but if the clientId has a new reference id whose start date is not continuous with its previous reference id, then I need to set that start date as the caseopendate.
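A sketch of the general idea using LAG (SQL Server 2012 or later); the table name is a placeholder, and the assumption here is that "continuous" means the new start date equals the previous reference's end date (swap in DATEADD(day, 1, ...) or whatever the real rule is):

SELECT clientId,
       referenceid,
       startdate,
       enddate,
       CASE WHEN LAG(enddate) OVER (PARTITION BY clientId ORDER BY startdate) IS NOT NULL
             AND LAG(enddate) OVER (PARTITION BY clientId ORDER BY startdate) <> startdate
            THEN startdate
       END AS caseopendate
FROM dbo.ClientReference;

Rows whose start date follows straight on from the previous reference get a NULL caseopendate; only the breaks in continuity get the start date copied across.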
So my data column [EODPosting].[MatchDate] is defined as a DATE column. I am trying to SELECT from my table where [EODPosting].[MatchDate] is today's date.
Is this not working because GETDATE() is like a timestamp format? How can I get this to work to return those rows where [EODPosting].[MatchDate] is equal to today's date?
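Comparing the DATE column against GETDATE() fails because GETDATE() carries a time portion, so the equality almost never matches. Casting it to DATE first is the usual fix (SQL Server 2008+):

SELECT *
FROM dbo.EODPosting
WHERE MatchDate = CAST(GETDATE() AS date);

Because MatchDate is already a DATE, the cast only has to happen on the GETDATE() side, which also leaves any index on MatchDate usable.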
Hello all, I have three textboxes on a page and I want to fill those textboxes with the date, month and year respectively. I have a datecreated column in the discount table in mm/dd/yy format. How do I extract the date, month and year from this format and put the values in the textboxes? Any help... Thanks... Anne
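On the SQL side, DAY(), MONTH() and YEAR() pull the three parts out (assuming datecreated is a datetime column; if it is stored as text in mm/dd/yy form, convert it first with CONVERT(datetime, datecreated, 1)):

SELECT DAY(datecreated)   AS day_part,
       MONTH(datecreated) AS month_part,
       YEAR(datecreated)  AS year_part
FROM discount;

The three values can then be bound to the three textboxes in the page code.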
Can someone help me with the best way to look up a date in the date dimension and populate the date id in the fact table? In the source the date is in dd/mm/yyyy format, and in the date dimension the columns are date id, year, quarter, month and day.
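Since the dimension exposes year/month/day rather than a full date column, one sketch is to convert the dd/mm/yyyy source string with style 103 and join on those parts (staging_fact, source_date and dim_date are placeholder names; the dimension column names follow the list above):

SELECT f.*,
       d.[date id]
FROM staging_fact AS f
JOIN dim_date AS d
  ON  d.[year]  = YEAR(CONVERT(datetime, f.source_date, 103))
  AND d.[month] = MONTH(CONVERT(datetime, f.source_date, 103))
  AND d.[day]   = DAY(CONVERT(datetime, f.source_date, 103));

In SSIS the same lookup can be done with a Lookup transformation fed by a Derived Column that performs the dd/mm/yyyy conversion.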
I cannot create a measure that returns results for dates that do not exist in the fact table, despite the fact that the components included in the measure contain valid results for these same dates. The goal is to create a measure that counts the number of days where the "stock qty" is below the "avg monthly sales qty for the last 12 months" (a rolling measure). Here is the DAX code I have tried for the measure (note that the filter explicitly refers to the date table, called Calendar, and not the fact table):
Below you can see the sub-measures (circled in red) are giving results for all days in the calendar. Highlighted in yellow are dates for which the StkOutCnt measure is not returning a result. Having investigated these blank dates, I am pretty confident that they are dates for which there are no transactions in the fact table (weekends, public holidays etc.). Why am I getting an "inner join" with my fact table dates despite the fact that this is not requested anywhere in the DAX code and that the two sub-measures are behaving normally?
So when I try to load from the Master table to the parent and child tables I am using an expression like:
select B.ID,A.* FROM FLATFILE_INVENTORY AS A JOIN DMS_INVENTORY AS B ON A.ACDealerID=B.DMSDEALERID AND A.StockNumber=B.STOCKNUMBER AND A.InventoryDate=B.INVENTORYDATE AND A.VehicleVIN=B.VEHICLEVIN WHERE convert(date,A.[FtpDate]) = convert(date,GETDATE()) and convert(date,B.Ftpdate) = convert(date,getdate()) ;
If I use this expression I get only the current system date's data from the Master table into the parent and child tables.

My problem is this: on my local server, if I have loaded today's date and need to load yesterday's date instead, I can change my system date to yesterday and run the expression, so that yesterday's data alone gets loaded from Master to the parent and child tables.

On a remote server I cannot change the system date.

While using this expression for the current date it loads perfectly, but when I try to load yesterday's data it still takes only the current date's data, not yesterday's.

What expression will load whichever date I am trying to load from the Master table into the parent and child tables, without changing the system date?
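One sketch is to drive the load from a variable instead of GETDATE(), so any date can be loaded without touching the server clock; @LoadDate below is a placeholder that could just as well come from an SSIS variable or job parameter:

DECLARE @LoadDate date;
SET @LoadDate = DATEADD(day, -1, CONVERT(date, GETDATE()));   -- e.g. yesterday

SELECT B.ID, A.*
FROM FLATFILE_INVENTORY AS A
JOIN DMS_INVENTORY AS B
  ON  A.ACDealerID    = B.DMSDEALERID
  AND A.StockNumber   = B.STOCKNUMBER
  AND A.InventoryDate = B.INVENTORYDATE
  AND A.VehicleVIN    = B.VEHICLEVIN
WHERE CONVERT(date, A.[FtpDate]) = @LoadDate
  AND CONVERT(date, B.FtpDate)   = @LoadDate;

Setting @LoadDate to any other value loads that day's data instead, with no change to the system date.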
I have a scenario where I have to update a table with a date when there are new records in another table.
For example:
I load the ODS table with the data from a file in SSIS. The file has CustomerID and other columns.
Now, when there is a new record for any CustomerID in the ODS, update the dbo table with the most recent record for every CustomerID (i.e. update the date column in dbo for that CustomerID). Also include an identifier that relates back to the ODS table. How do I do this?
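A sketch of one way to do it in T-SQL, picking the most recent ODS row per CustomerID with ROW_NUMBER and carrying its key back as the identifier; dbo.Customer, ods.Customer, LoadDate, LastLoadDate and OdsID are all placeholder names:

UPDATE d
SET    d.LastLoadDate = o.LoadDate,
       d.OdsRecordID  = o.OdsID
FROM   dbo.Customer AS d
JOIN  (SELECT CustomerID, LoadDate, OdsID,
              ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY LoadDate DESC) AS rn
       FROM ods.Customer) AS o
  ON  o.CustomerID = d.CustomerID
  AND o.rn = 1;

In SSIS this could run as an Execute SQL task after the ODS load, or be replaced with a MERGE if inserts and updates need to be handled together.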
I need help with how to archive a table into another table, adding a unique number for all rows of the batch plus a date/time (not to the second, just the day and the time to the minute). When I insert into the other table I need to add two more fields (unique number, date_time).

This is table 1, which I select from:

ID  fname  new_date  val_holiday
----------------------------------------------------

This is table 2, which I insert into:

ID  fname  new_date  val_holiday  unique number  date_time
--------------------------------------------------------------------------------------------------------------------

For every archive run from one table to the other (each insert) I need to get a new unique number plus the date/time (not to the second, just the day and the time to the minute).

Next insert ......

ID  fname  new_date  val_holiday  unique number  date_time
--------------------------------------------------------------------------------------------------------------------

Next insert ......

ID  fname  new_date  val_holiday  unique number  date_time
--------------------------------------------------------------------------------------------------------------------
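A sketch of one way to stamp each archive batch, using a GUID as the "unique number" and smalldatetime (which only keeps minutes) for the date/time; dbo.table1 and dbo.table2 stand in for the real table names:

DECLARE @BatchID   uniqueidentifier
DECLARE @BatchTime smalldatetime

SET @BatchID   = NEWID()                              -- new unique value per batch
SET @BatchTime = CONVERT(smalldatetime, GETDATE())    -- minute precision, no seconds

INSERT INTO dbo.table2 (ID, fname, new_date, val_holiday, unique_number, date_time)
SELECT ID, fname, new_date, val_holiday, @BatchID, @BatchTime
FROM dbo.table1;

Every row inserted in the same run shares the same @BatchID and @BatchTime; if the unique number has to be sequential rather than a GUID, an int taken from MAX(unique_number) + 1 or a dedicated counter table would do the same job.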