Hi, please help me out!
I have to update the incidents table.
The initiator and initdept columns in the incidents table take their values from another table, Employee, where initiator corresponds to Empname and initdept corresponds to title.
If I just do
select Empname, title from Employee
it gives the correct output, while
select initiator, initdept from incidents
gives the initiator correctly but, instead of the initdept,
it outputs the initiator again.
The output is given below:

initiator          initdept
Jeff C. Taylor     Jeff C. Taylor
Randy S. Jonas     Randy S. Jonas
Mike Lewis         Mike Lewis

It should give the output like:

initiator          initdept
Jeff C. Taylor     Software Engg
Randy S. Jonas     Tester
I hope you got my question!
Now I have to update the incidents table so that it gives the initiator as well as its corresponding initdept.
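A minimal sketch of the fix, assuming Empname is unique in Employee so that the initiator name already stored in incidents can serve as the join key (the posts don't show a shared key column, so this join condition is an assumption):

UPDATE i
SET i.initdept = e.title
FROM incidents i
INNER JOIN Employee e
    ON e.Empname = i.initiator;   -- join on the name, since initiator already holds Empname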
I have encountered a problem doing an update on a field defined as DECIMAL with dbcursor().
I opened a cursor with dbcursoropen() with the LOCK_CC option set. Then I bound some char variables to the CHAR fields with STRINGBIND and some double variables to the DECIMAL fields with FLT8BIND.
When I read the cursor, the data in the bound variables is correct. But when I do an update, all data for the numeric fields is gone and the database contains only NULL values. The strings are updated correctly.
I have tried setting the length; I am using the poutlen variables with a value other than 0, but nothing helps.
Hi, I have a table called cur and I want one of the fields to contain the value of two other fields multiplied together.

Table 'cur' fields: ID, A, B, C. Field C must equal A * B.

I have this so far, and it works, but it updates ALL of the rows in column C:

ALTER TRIGGER TG_cur ON cur AFTER INSERT
AS
BEGIN
    UPDATE cur
    SET C = (i.A * i.B)
    FROM inserted i
END

RESULT:

ID | A | B | C
1  | 3 | 4 | 20
2  | 6 | 6 | 20
3  | 8 | 9 | 20
4  | 2 | 3 | 20
5  | 4 | 5 | 20

I think I need a WHERE clause at the bottom, but the following does not work:

where ID = i.ID

REQUIRED RESULT:

ID | A | B | C
1  | 3 | 4 | 12
2  | 6 | 6 | 36
3  | 8 | 9 | 72
4  | 2 | 3 | 6
5  | 4 | 5 | 20

Any help would be great. Many thanks, Bil
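A minimal fix: join the target table to the inserted rows so only the newly inserted rows are touched (a bare WHERE ID = i.ID fails because i is not in scope without the join).

ALTER TRIGGER TG_cur ON cur AFTER INSERT
AS
BEGIN
    UPDATE cur
    SET C = i.A * i.B
    FROM cur
    INNER JOIN inserted i ON cur.ID = i.ID   -- restrict the update to the inserted rows
END

If C never needs to hold anything other than A * B, a computed column (C AS A * B) would avoid the trigger entirely.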
We have installed the SQL 7.0 client to connect to SQL Server 7.0. After installing the client (which updated all ODBC drivers), database connections, even ODBC calls using MS Access drivers, became extremely slow. Any hints where to look?
How can I retrieve two columns from a SQL table with '-' separating the results, under a common column name? Something like this:

select id, name from user

id  name
1   a
2   b
3   c

I need the result set to be:

my_reports
1-a
2-b
3-c

Is that possible? If so, could anyone tell me how to do it?
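A minimal sketch using string concatenation and a column alias (assuming id is an integer, so it needs a cast to text; user is bracketed because it is a reserved word):

SELECT CAST(id AS varchar(10)) + '-' + name AS my_reports
FROM [user];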
In order to allow my customers to keep their existing data when they install the updated application, I want to be able to add columns to existing CE tables at startup time and allow the user to enter data into the new column(s) immediately. I used the ALTER TABLE command as shown in the example below:
cmdexe = cmd.ExecuteNonQuery()
to add the column. But this does not set the columns in the attached dataset. The program sees the columns and data can be added, but it is not then in the underlying table. What else do I need to do?
Hi, I have 2 tables, EMP and STU. In the EMP table there is a column "Country"; in the STU table there is a column "City". Where EMPID = STUID, how can I update those 2 columns?
This is a one-table, one-column update; how can I do 2 at a time? I don't want to do it in 2 separate UPDATE statements.
UPDATE EMP SET EMP.Country = 'USA' WHERE EMP.EMPID = STU.STUID
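A single UPDATE statement can only modify one table, so the closest equivalent is two UPDATEs wrapped in one transaction. A minimal sketch, with 'USA' and 'NYC' as placeholder values and an explicit join added (the posted statement references STU without joining it):

BEGIN TRANSACTION;

UPDATE E
SET E.Country = 'USA'
FROM EMP E
INNER JOIN STU S ON E.EMPID = S.STUID;

UPDATE S
SET S.City = 'NYC'
FROM STU S
INNER JOIN EMP E ON E.EMPID = S.STUID;

COMMIT;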
I have a table with incremental columns: a column gets added every month, so the table contains previous-month and present-month data about customers, details, and transactions. The problem with this data is that if a customer is new, his information for the previous months is NULL, and those values have to be coded as "NOT PRESENT".

Now, how do we convert all the previous columns that are NULL for a particular customer at the same time?

Here is the proc written for it:

DROP PROCEDURE DE_NAT
GO
CREATE PROCEDURE DE_NAT
AS
BEGIN
    DECLARE @MONMIN1 NVARCHAR(100), @MON NVARCHAR(100), @YEAR NVARCHAR(100), @MONYEAR NVARCHAR(100)
    SET @MONMIN1 = DATENAME(MONTH, DATEADD(MONTH, -1, GETDATE()))
    SET @MON = MONTH(GETDATE())
    SET @YEAR = YEAR(GETDATE())
    SET @MONYEAR = @MON + @YEAR

    EXEC('SELECT A.CUSTOMERS, B.*, CAST(A.RFM_40D AS FLOAT) AS R40
          INTO TSD_' + @MONYEAR + '
          FROM TSD_20 A
          LEFT OUTER JOIN SD20 ' + @MONMIN1 + ' B ON A.CUSTOMERS = B.CUSTOMER')
END

This proc just appends the present month's data to last month's data.

But if a customer is new, how do I replace the NULL values in the previous columns with 'NOT PRESENT'?

For example, for CUSTOMERS = '101023', the column DFGHHGFD holds this month's data, and I want to change all the NULL values preceding it to "INACTIVE".

Can I change all columns from NULL to "INACTIVE" at the same time? Next month the number of columns will increase again, which will cause the same problem.

Please tell me a method so that I can do the needful.
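One pattern that copes with the ever-growing column list is to generate the UPDATE statements dynamically from the catalog, so every existing column gets its NULLs replaced in one pass, no matter how many columns were added. A hedged sketch, with TSD_0824 standing in for the current month's table name and assuming the history columns are character-typed (swap in 'INACTIVE' for 'NOT PRESENT' as needed):

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'UPDATE TSD_0824 SET ' + QUOTENAME(COLUMN_NAME)
    + N' = ''NOT PRESENT'' WHERE ' + QUOTENAME(COLUMN_NAME) + N' IS NULL; '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TSD_0824'
  AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');

EXEC sp_executesql @sql;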
I have Table1 with 2 columns, Label_ID and Athlete_ID. I have another table, Table2, with 3 columns: Label_ID, Athlete_ID, Data. I need to join these tables so that the result has the same number of rows as Table1, with an extra column, Data, which corresponds to Data in Table2 when Label_ID and Athlete_ID match and is NULL when no match is found. I have the following query, which does not produce the desired result:
SELECT Table1.label_id, Table1.athlete_id, data
FROM Table1
LEFT OUTER JOIN Table2
    ON (Table1.label_id = Table2.label_id AND Table1.athlete_id = Table2.athlete_id)
The end result is a table with only the rows where label_id and athlete_id match between the tables, and nothing where they do not. I expected the OUTER JOIN to include those rows, but it is not working for whatever reason. I'm pretty sure there is a simple solution, but I cannot figure it out myself.
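The posted query is the standard outer-join form and should return unmatched Table1 rows with NULL data, so a likely culprit is an extra predicate on Table2 sitting in a WHERE clause, which silently turns the outer join back into an inner join. A sketch of the distinction (the t2.data filter is a hypothetical example):

SELECT t1.label_id, t1.athlete_id, t2.data
FROM Table1 t1
LEFT OUTER JOIN Table2 t2
    ON  t1.label_id = t2.label_id
    AND t1.athlete_id = t2.athlete_id
    AND t2.data > 0        -- filters on Table2 columns belong in the ON clause...
-- WHERE t2.data > 0       -- ...not here, which would discard the unmatched (NULL) rows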
If I have some MDX I'm using in Reporting Services like this:
select
{ [TimeByMinute].[All TimeByMinute].[2005].[May].[1] : [TimeByMinute].[All TimeByMinute].[2005].[May].[6] } on columns,
{ A_list_of_measures } on rows
from ACD_Calls
The column names are unique to the day of the month, which means that when I use a table to display this in Reporting Services, the field names change dynamically when the date parameters change, and the table stops working.
I'll post this in the Reporting Services forum too, but I thought maybe I could alias the column names in MDX, shielding the Reporting Services table from changes in dates.
What do you think? Would a matrix be more flexible in this case?
I have two tables: TransactionsImport (which is the destination table) and TransactionsImportDelta.
I need to do the following:
1. Get the records with the latest date and time in the destination table TransactionsImport.
2. Get the records with the latest date and time in the TransactionsImportDelta table.
3. Insert the records from TransactionsImportDelta into TransactionsImport that have a greater date & time than the current records in the TransactionsImport table.
The problem is that date & time are in separate columns.

Table structure:

Date      Time    ID
20111213  051541  07142201008300100
20111213  051541  22B1L13ZY0000A07YD
20111213  045047  35142201090002600
20111213  045047  37142201095008300
20111213  045047  37142201090002600
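If Date and Time are fixed-width, zero-padded character columns (as the sample data suggests; that reading is an assumption), the two can be concatenated and compared as strings, since YYYYMMDD followed by HHMMSS sorts chronologically. A minimal sketch under that assumption:

INSERT INTO TransactionsImport ([Date], [Time], ID)
SELECT d.[Date], d.[Time], d.ID
FROM TransactionsImportDelta d
WHERE d.[Date] + d.[Time] >
      (SELECT MAX([Date] + [Time]) FROM TransactionsImport);  -- latest combined date & time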
I have a simple table as shown. I want the values in the last column to represent the time interval between the 2 date columns (visits); i.e. for EventID 2, for example, I will have
entry(EventID = 2) - exit(EventID = 1), and so on.
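A sketch assuming SQL Server 2012 or later and hypothetical names (Visits table with EntryDate and ExitDate columns, since the real schema isn't shown): LAG pulls the previous row's exit time so each row can subtract it from its own entry time.

SELECT EventID,
       EntryDate,
       ExitDate,
       DATEDIFF(MINUTE,
                LAG(ExitDate) OVER (ORDER BY EventID),  -- previous visit's exit
                EntryDate) AS MinutesSincePreviousVisit
FROM Visits;

On earlier versions, a self-join on v2.EventID = v1.EventID - 1 achieves the same thing.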
In my DB, many of my tables have a column named upsize_ts. I have already been told that it is not linked in any way to the data in the DB. The data type of these columns is timestamp, and every record in those columns shows as "<Binary>". Maybe it's because the whole DB comes from an export of an Access DB, or maybe not. I just had a few questions:
1. Does anyone know why that column got generated and what it means? (optional)
2. Now the real question: is there any statement allowing me to "mass-drop" all those columns named "upsize_ts" in every table of my DB at the same time? Or at least any way to avoid doing it one ALTER TABLE at a time?
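On the first question, the Access Upsizing Wizard adds a timestamp column named upsize_ts when asked to add timestamps to exported tables, which fits the Access-export theory. On the second, there is no single statement that drops a column from every table, but the catalog can generate the ALTER TABLE statements for you. A sketch:

SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
     + ' DROP COLUMN upsize_ts;'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'upsize_ts';

-- review the generated statements, then execute them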
Case: exporting a report to PDF/printing/TIFF. Report: contains 1 table with 19 columns. 1 column is static; the other 18 are visible at the user's discretion. When printed or exported to PDF, the report naturally spans 2 pages, 16 columns on the first page and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report complies, and the viewable version is perfect; the Excel export is perfect. But in the PDF export, on the first page, every fifth column, starting with the last column that was hidden, is expanded to take up additional width. On the spanned page, the first column renders correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the same width the original 2 columns would have taken, plus its own.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page-size selection available to the client. There are far too many combinations of what the user can elect to show or hide to build separate tables to show and hide on the same report to remove this effect.
Any help or suggestions on this issue would be appreciated.
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is a chance I could have a duplicate datetime, since the dates are imported via XML from multiple sources.
Is there a way to keep track, in real time, of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over 5 minutes.
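A polling sketch using the dynamic management views (requires VIEW SERVER STATE): sys.dm_exec_requests exposes the start time of every executing request, so an Agent job could run this on a schedule and kick off the trace when anything crosses the threshold.

SELECT r.session_id,
       r.start_time,
       DATEDIFF(MINUTE, r.start_time, GETDATE()) AS minutes_running,
       t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id <> @@SPID                                -- ignore this monitoring query
  AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 5;      -- running for 5+ minutes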
I am trying to load the previous day's data at 3 AM via an SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the For Loop.
Now, as this job runs at 3 AM and I set the variable as GETDATE() - 1, it excludes the data from 12 AM to 3 AM from the result set, because Date is set to YYYY-MM-DD 03:00:00.000. I need this to be set to YYYY-MM-DD 00:00:00.000.
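One way, assuming the variable is of type DateTime: round-trip the expression through DT_DBDATE, which carries no time portion, so casting back yields yesterday at midnight.

(DT_DATE)(DT_DBDATE)DATEADD("dd", -1, GETDATE())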
I hope to update a DateTime column value with a Time input parameter. My poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000, and I might have an existing @SendOnDate of 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
    @QuePoolID int = NULL,
    @ApptTime time(7),
    @SendOnDate datetime
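A minimal sketch of the combination itself, using the example values from the post: cast the datetime down to a date to strip its time, cast both halves back to datetime, and add them.

DECLARE @ApptTime time(7) = '10:45:00';
DECLARE @SendOnDate datetime = '2015-10-05 07:00:00';

SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime)  -- 2015-10-05 00:00:00
                + CAST(@ApptTime AS datetime);                 -- 1900-01-01 10:45:00

-- @SendOnDate is now 2015-10-05 10:45:00.000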
I am using VS2005 (VB) to develop a Pocket PC WM5.0 program, with SQL CE 3.0. My PPC hardware runs at 400 MHz.
The question: each time the program starts, the first insert into the .sdf database takes a long time. Does anyone know why, and how can I fix it?
I load the whole database into a dataset when the program starts, do all the "Insert", "Update", and "Delete" operations in this dataset, and write it back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn)   ' SQL = "Select * From Table"
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); normally it takes about 0.08 s to write one record back to the database. But:
1. Start the PPC program
2. Load the DB into a dataset
3. Create ONE new record in the dataset
4. Write it back to the DB
When I take these four steps, the write time is almost 1 s or even more, every time!
Actually, 0.08 s is just the normal case. Sometimes it still takes over 1 s to write back a dataset with only one inserted record while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and use the following code:

cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn)  ' the INSERT statement is built beforehand:
                                      ' Insert Into Table Values(... all fields ...)
cmd.ExecuteNonQuery()
cn.Close()

I found that the first insert still takes more time, but only about 0.2 s, and the normal insert time is around 0.02 s. It is 4 times faster!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to reflect the commit; however, it seems the insert/update statement is cached and getdate() is evaluated at the time of the statement instead of the commit. This causes problems: we select rows based on LAST_UPDATE, a commit may occur later, the earlier insert timestamp is saved to the database, and we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried getdate(), current_timestamp, and even {fn Now()}, with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreation of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
    declare @time datetime
    select @time = getdate()

    insert into COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    select PKEY, getdate()
    from inserted
end
GO

create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
    declare @time datetime
    select @time = getdate()

    update COMMIT_TEST_UPDATE
    set LAST_UPDATE = getdate()
    from COMMIT_TEST_UPDATE, deleted, inserted
    where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
    if ( select count(*) from deleted ) > 0
    begin
        delete COMMIT_TEST_UPDATE
        from COMMIT_TEST_UPDATE, deleted
        where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    end
end
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time? */
SELECT GETDATE()
GO

BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO

/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO

/* Select results from the tables - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could change the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues or could still miss deals (assuming the commit time is longer than some artificial time window).
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and cast them as integers with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
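A sketch that adapts the attempt above to the column itself rather than GETDATE(), assuming the temp table is #myTable: cast the text to datetime first ('4:30 pm' parses directly), format it with style 108, then strip the colon. The implicit cast back to the text column stores the military value; wrap it in CAST(... AS int) if the column is numeric. A session temp table can be updated like any other table for as long as the session holds it.

UPDATE #myTable
SET MovieTime = REPLACE(CONVERT(char(5), CAST(MovieTime AS datetime), 108), ':', '')

-- '4:30 pm' -> '1630', '11:00 am' -> '1100'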
We are using SQL Server 2008 as our database and Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screenshot for the form. I pretty much know how to do it all; the only complication is figuring out the easiest way to get the transaction punch-record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screenshot.
I am not super strong in SQL, but figure I can do it with a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record?
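A PIVOT sketch under assumed column names (employee_id, punch_date, punch_type, and punch_time are hypothetical, since the real schema isn't shown); it turns one row per punch into one row per employee per day:

SELECT employee_id,
       punch_date,
       [IN]        AS time_in,
       [LUNCH_OUT] AS lunch_out,
       [LUNCH_IN]  AS lunch_in,
       [OUT]       AS time_out
FROM (
    SELECT employee_id, punch_date, punch_type, punch_time
    FROM employee_punch_record
) AS src
PIVOT (
    MIN(punch_time) FOR punch_type IN ([IN], [LUNCH_OUT], [LUNCH_IN], [OUT])
) AS pvt;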
I have a very simple time series model which processes fine without any problem. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services

Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These values of HISTORIC_MODEL_GAP (1) and HISTORIC_MODEL_COUNT (10) should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?