I have a query in which the column I am trying to update, "O_Test", is hard coded. I used the SELECT statement below to figure out how many tables across the entire database contain that column, and it accounts for 643 records. So I am wondering if there is a way I can update the column in every table without looking up each table and updating it one at a time. A single UPDATE statement won't work because I am accounting for all records in the DB across every table that contains the hard-coded column.
SELECT t.name AS table_name,
       SCHEMA_NAME(schema_id) AS schema_name,
       c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%O_Test%'
ORDER BY schema_name, table_name;
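One way to avoid touching each table by hand is to generate the UPDATE statements dynamically from the same catalog query. This is only a sketch: @NewValue, its data type, and the final sp_executesql call are assumptions to adapt.

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'UPDATE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
    + N' SET ' + QUOTENAME(c.name) + N' = @NewValue;' + CHAR(13)
FROM sys.tables AS t
INNER JOIN sys.columns AS c ON t.object_id = c.object_id
WHERE c.name LIKE '%O_Test%';

-- Inspect @sql before running it; @NewValue is a placeholder value and type.
EXEC sys.sp_executesql @sql, N'@NewValue varchar(50)', @NewValue = 'SomeValue';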
I have a table and a specific column inside this table. Using sys.dm_db_index_usage_stats, I was able to determine that this table is being updated by some process (stored procedure / SQL job / etc.), but the problem is I am not sure which process is doing it.
How would I search our SQL Server 2008 database to find any process that manipulates this table / column? (I only care about INSERTs / UPDATEs and DELETEs, not SELECTs.)
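As a first pass, a plain text search of module definitions can narrow down the candidates. A sketch, where 'MyTable' is a placeholder for the actual table name; note this is a simple text match, so it will also hit comments and SELECT-only references and the results need manual review:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS module_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%MyTable%';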
I ran a db update script to update the datatype for a column from a tinyint to a smallint:
ALTER TABLE MyTable ALTER COLUMN Age smallint
Then I ran another db update script to update the value in these 2 columns:
update MyTable set Age = Age * 12,
However, the second db update script returns the following error:
Msg 220, Level 16, State 2, Line 1 Arithmetic overflow error for data type tinyint, value = 1440. The statement has been terminated.
SSMS shows the column with the new data type of smallint. I even restarted SSMS, but I still get the error above. 1440 is out of range for a tinyint but should be in range for a smallint. Both scripts need to be executed together, in sequence, by the release manager.
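One thing worth ruling out, since the scripts run together: if the ALTER and the UPDATE execute in the same batch, the UPDATE may be compiled while the column is still tinyint. A sketch of separating them into their own batches:

ALTER TABLE MyTable ALTER COLUMN Age smallint;
GO  -- end the batch so the UPDATE is compiled against the new column type
UPDATE MyTable SET Age = Age * 12;
GO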
I have a table with 201 columns. I am importing 200 columns from a file using a DFT. I want to update the 201st column with the fileId of the file that was just imported. I am storing the fileId of the file in varFileID.
How do I go back and update the 201st column (column name sfileId) with the varFileID value?
I can use an Execute SQL Task, but how will I know the rows are the records of the file I just imported and not other rows?
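One common pattern, assuming sfileId is left NULL by the data flow and no other load runs at the same time, is to have the Execute SQL Task stamp only the unstamped rows, with varFileID mapped as a parameter. A sketch (OLE DB parameter syntax):

UPDATE dbo.MyTable
SET sfileId = ?      -- map ? to the User::varFileID package variable
WHERE sfileId IS NULL;

Alternatively, a Derived Column in the DFT can add @[User::varFileID] to the pipeline so the value lands with the rows and no second pass is needed.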
I have a table with 10 rows containing a varbinary column.
I wish to concatenate all of the varbinary values into a single binary value and then write that to another table within the database. This application splits a binary file (a Word or PDF document) into multiple segments (this is Column2, below).
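A sketch using hypothetical table and column names, assuming a SegmentID column records the original order of the segments; note that the ordering of the variable-concatenation pattern is not formally guaranteed by SQL Server, so for anything beyond a handful of rows a cursor is the safer route:

DECLARE @blob varbinary(max) = 0x;

SELECT @blob = @blob + Column2
FROM dbo.Segments
ORDER BY SegmentID;   -- ordering of variable concatenation is not guaranteed

INSERT INTO dbo.WholeDocuments (DocumentData)
VALUES (@blob);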
I want to store images as binary data in a SQL table and compare each stored image with an image file I am receiving. I've tried the approach below but am getting an error:
DROP TABLE #BLOBTest
CREATE TABLE #BLOBTest (
    TestID int IDENTITY(1,1),
    BLOBName varChar(50),
    BLOBData varBinary(MAX)
);
[Code] ....
Error: Msg 4861, Level 16, State 1, Line 10 Cannot bulk load because the file "C:\Files\12656.jpg" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 15105).
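For what it's worth, operating system error code 3 is "path not found", and the path is resolved by the SQL Server service account, not the client machine. A sketch of the load-and-compare, with the path as a placeholder:

-- Load the file as a single BLOB (path must be visible to the SQL Server service).
INSERT INTO #BLOBTest (BLOBName, BLOBData)
SELECT '12656.jpg', img.BulkColumn
FROM OPENROWSET(BULK N'C:\Files\12656.jpg', SINGLE_BLOB) AS img;

-- Compare the stored copy against the incoming file byte-for-byte.
SELECT t.TestID
FROM #BLOBTest AS t
CROSS JOIN OPENROWSET(BULK N'C:\Files\12656.jpg', SINGLE_BLOB) AS f
WHERE t.BLOBData = f.BulkColumn;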
I have a well-structured but also very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If the data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function that reads the portions of the data I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.
A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX) and read it using SUBSTRING with the appropriate offset and length. What I'm after is the performance of SQL write/read of binary data of this size, and whether there is a third option I haven't thought of. I'm using SQL Server 2014.
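For reference, the SUBSTRING read would look roughly like this (hypothetical names; note SUBSTRING is 1-based, so a zero-based offset needs + 1):

DECLARE @id int = 1,
        @offset bigint = 1048576,   -- zero-based byte offset into the blob
        @length int = 65536;

SELECT SUBSTRING(DataBlob, @offset + 1, @length) AS Chunk
FROM dbo.BinarySnapshots
WHERE SnapshotID = @id;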
I need to update multiple columns in a table based on multiple conditions.
For example, this is my Query
update Table1
set weight = d.weight,
    stateweight = d.stateweight,
    overallweight = d.overallweight
from (select * from table2) d
where table1.state = d.state
  and table1.month = d.month
  and table1.year = d.year
If the tables match on all three columns (state, month, year), it should update only the weight column; if they match on (state, year), it should update only the stateweight column; and if they match on (year) alone, it should update only the overallweight column.
I can't write an update query for each condition separately because it's a huge select.
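One shape that keeps this to a single statement is to join on the weakest condition (year) and let CASE expressions decide which column to touch. A sketch, assuming at most one table2 row matches each Table1 row per year:

update t1
set weight        = case when t1.state = d.state and t1.month = d.month
                         then d.weight else t1.weight end,
    stateweight   = case when t1.state = d.state and t1.month <> d.month
                         then d.stateweight else t1.stateweight end,
    overallweight = case when t1.state <> d.state
                         then d.overallweight else t1.overallweight end
from Table1 t1
inner join table2 d
    on t1.year = d.year;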
If I have a table with one or more nullable fields, and I want to make sure that when an INSERT or UPDATE occurs and one or more of these fields is left NULL, either explicitly or implicitly, is there a way I can set these to non-null values without interfering with the INSERT or UPDATE as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded into a real date, preferably the value of GetDate() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left null or explicitly set to NULL. The same would apply for any UPDATE where DateModified is not specified or is explicitly set to NULL: I would want to change it so that DateModified is not null after any UPDATE.
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
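An AFTER INSERT trigger can do this without taking over the whole statement: it only revisits the audit column once the row is in. A sketch, assuming ID is the key; an equivalent AFTER UPDATE trigger would cover DateModified:

CREATE TRIGGER dbo.trg_MYTABLE_DateAdded
ON dbo.MYTABLE
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Backfill only rows that arrived with a NULL DateAdded.
    UPDATE t
    SET DateAdded = GETDATE()
    FROM dbo.MYTABLE AS t
    INNER JOIN inserted AS i ON i.ID = t.ID
    WHERE t.DateAdded IS NULL;
END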
I am getting an error importing a CSV file, both using SSIS and SSMS. The CSV is comma delimited, with quotes as text qualifiers. The file gets partially loaded and then gives me an error stating: The column delimiter for column "MyColumn" was not found. In SSIS it gives me the data row which is apparently causing the problem, but when I look at that specific row in a text editor, the comma delimiter is there and the row looks fine. I am using SQL Server 2008.
I have run into a perplexing issue with how to UPDATE some rows in my table correctly. I have an Appointment table which has appointment times and appointment names. However, the name only shows on the appointment's start-time row; I need it to show for the appointment's whole duration. So, for example, in my DDL 'Morning Appt' only shows at 8:00; I need it to show on each line until the next appointment starts, and so on. In this case 'Morning Appt' should show at 8:00, 8:15, and 8:30.
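A fill-down sketch with hypothetical column names: for each time slot, take the name from the latest row at or before it that actually has a name. The same APPLY can drive an UPDATE if the values need to be persisted rather than just selected:

SELECT a.ApptTime,
       ca.ApptName
FROM dbo.Appointment AS a
OUTER APPLY (SELECT TOP (1) p.ApptName
             FROM dbo.Appointment AS p
             WHERE p.ApptTime <= a.ApptTime
               AND p.ApptName IS NOT NULL
             ORDER BY p.ApptTime DESC) AS ca;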
I'm getting the following error message: 'An aggregate may not appear in the set list of an UPDATE statement'. What is the proper way to carry out an update involving aggregates?
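The usual workaround is to compute the aggregate in a derived table and join it to the update target. A sketch with hypothetical names:

UPDATE t
SET t.TotalAmount = s.SumAmount
FROM dbo.CustomerTotals AS t
INNER JOIN (SELECT CustomerID, SUM(Amount) AS SumAmount
            FROM dbo.Orders
            GROUP BY CustomerID) AS s
    ON s.CustomerID = t.CustomerID;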
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example the last row in the table is
MeasureID = c010, GroupID = A, Numerator = 92, Denominator = NULL
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
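A self-join sketch, with dbo.Measures as a placeholder table name:

UPDATE d
SET d.Denominator = n.Numerator
FROM dbo.Measures AS d
INNER JOIN dbo.Measures AS n
    ON n.MeasureID = 'c001' AND n.GroupID = 'A'
WHERE d.MeasureID = 'c010'
  AND d.GroupID = 'A'
  AND d.Denominator IS NULL;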
I have a table where I would like to update the document number for 3k rows. The problem I have is that the documents come in sets of two (version 1 and version 2), but the two versions have different numbers. Picture it like this:
DOCNUM: 4445787  Version 1
DOCNUM: 4445790  Version 2
It should be the same docnum (i.e. 4445787 Version 1, 4445787 Version 2).
The challenge is how to assign the version 1 docnum to version 2 as well. Basically, in SQL we need a way to (a rough sketch follows the list):
1. Find a way to distinguish the pairs of documents in the target db that are the same even though they have different docnums.
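Assuming some shared key already links the two versions of a document (PairKey below is hypothetical; without such a key, step 1's pairing rule still has to be defined), the renumbering itself could look like:

UPDATE d
SET d.DOCNUM = p.MinDocNum
FROM dbo.Documents AS d
INNER JOIN (SELECT PairKey, MIN(DOCNUM) AS MinDocNum
            FROM dbo.Documents
            GROUP BY PairKey) AS p
    ON p.PairKey = d.PairKey
WHERE d.Version = 2;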
I have a new production server with about 100 jobs on it. These are ETL jobs, about half of which are SSIS packages and half of which call stored procedures directly. For those calling stored procedures directly, might there be a way to use the system catalog to link the schedule of a job to the table being updated by its procedure? The desired result is a list of tables (those updated by jobs) and the schedule of when they are updated.
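A starting point, sketched with a hypothetical procedure name: pull the T-SQL job steps from msdb, then for each procedure found in a step's command, ask sys.dm_sql_referenced_entities which objects it modifies. Joining msdb.dbo.sysjobschedules and sysschedules would then add the schedule.

-- Job steps that run T-SQL (the procedure name can be parsed from command).
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs AS j
INNER JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.subsystem = 'TSQL';

-- For one procedure (dbo.MyEtlProc is a placeholder), the objects it writes to.
SELECT referenced_entity_name, is_updated
FROM sys.dm_sql_referenced_entities('dbo.MyEtlProc', 'OBJECT')
WHERE is_updated = 1;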
SQL 2012 Standard VPS Windows 2012 Server Standard
I am experimenting with using CDC to track user changes in our application database. So far I've done the following:
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO
-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
[Code] ....
The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The VB.NET application passes a table-valued parameter to a stored procedure, which updates the table. Below is the creation script for the stored proc:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
if exists (select * from sysobjects
           where id = object_id('dbo.spdv_AVAUpdate')
             and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
[Code] ....
When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT *
FROM cdc.dbo_AVA_CT
GO
--RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
--__$start_lsn            __$end_lsn  __$seqval               __$operation  __$update_mask  AvaKey  AvaDesc  AvaArrKey  AvaSAPAppellationID
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  3             0x02            119     Test2    6          NULL
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  4             0x02            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980003  1             0x0F            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980004  2             0x0F            119     Test4    6          NULL
Why might this be happening, and what can be done to correct it? Also, is there any way to get the user ID associated with the CDC change?
I have a stored procedure that updates a table. I also have a UDF that allows dirty reads (NOLOCK).
What's the precedence in SQL Server? If I add XLOCK, ROWLOCK to the UPDATE statement, will the dirty read wait for the update transaction to commit, or will it perform a dirty read regardless of the locking scheme in the UPDATE statement?
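A two-session repro sketch with hypothetical names, showing the two statements in question (NOLOCK reads run at READ UNCOMMITTED and do not request the shared locks that would otherwise wait on the XLOCK):

-- Session 1: take and hold an exclusive row lock.
BEGIN TRANSACTION;
UPDATE dbo.Accounts WITH (XLOCK, ROWLOCK)
SET Balance = Balance - 100
WHERE AccountID = 42;
-- (leave the transaction open)

-- Session 2: the dirty read proceeds without waiting.
SELECT Balance
FROM dbo.Accounts WITH (NOLOCK)
WHERE AccountID = 42;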
If a column is set to allow NULLs, I know that a constraint can be used to supply a default (i.e. GetDate()) when no value is provided, but what about when an explicit NULL is provided in an INSERT or UPDATE statement? Is there any way other than an AFTER trigger to substitute a value for an explicitly provided NULL? In other words, assuming that dtAsof is a NULL-enabled column, is there any way to override what the following will do to MYTABLE:
If there's no way to do this in SQL Server 2008 R2, then what about later versions of SQL Server? Do any more recent versions have a way to deal with this? We have a third-party app that uses a SQL Server back end, and many of the tables have columns for storing audit-like data such as date/time, but many are left with NULL values, and I'd really like to fix that in as passive a way as possible so as not to break the app that uses the database. I know a constraint with a default can be used to override a missing value, but not when a NULL is explicitly provided.
I have a table called 'DC_BIL_ActivityLog' with an XML column named 'ActivityDescription', in SQL Server 2012.
The following information is stored in that column. I want to read the cancellation date (12/23/2015) using a SELECT statement.
<ActivityDescription>
  <text value="PCN was initiated for Policy ^1 on 12/07/2015. Cancellation Date is: 12/23/2015. Amount needed to rescind PCN is: $XX.80." />
  <link id="1" linkText="GLXXXP2015 12/02/2015 - 12/02/2016" linkType="policy">
    <linkId parm="1" value="1140" />
  </link>
</ActivityDescription>
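Since the date lives inside free text rather than in its own attribute, one approach is to pull the @value attribute out with XQuery and then parse the date by position. A sketch, assuming the 'Cancellation Date is:' phrasing is stable:

;WITH x AS (
    SELECT ActivityDescription.value('(/ActivityDescription/text/@value)[1]',
                                     'varchar(max)') AS txt
    FROM dbo.DC_BIL_ActivityLog
)
SELECT TRY_CONVERT(date,
           SUBSTRING(txt, CHARINDEX('Cancellation Date is:', txt) + 22, 10),
           101) AS CancellationDate
FROM x
WHERE txt LIKE '%Cancellation Date is:%';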
Why don't I ever get a return value of 1 when the following binary column (profSignature) is NULL?

RETURN SELECT ISNULL(profSignature, profSignature)
FROM mpProfiles
WHERE ApplicantID = CAST(@CID AS INT)
  AND ProfileID = CAST(@PID AS INT)
I have a student table like this: studentid, schoolID, previousschoolid, gradelevel.
I would like to load this table every day from the student system.
During the year, a student could change schoolid. Whenever there is a change, I would put the current record's schoolid into the previousschoolid column, and set schoolid to the new schoolid from the student system.
My question concerns my MERGE statement, which looks something like this:
Merge into student st
using (select * from InputStudent) ins
    on st.id = ins.studentid
When matched then update
set st.schoolid = ins.schoolid,
    st.previouschoolid = case when (st.schoolid <> ins.schoolid)
                              then st.schoolid
                              else st.previouschoolid end,
    st.grade_level = ins.grade_level;
My question is: since schoolid is set in the first line of the SET clause, will the second line still see what the previous schoolid was?
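For what it's worth, a tiny self-contained demo of the SET semantics (hypothetical throwaway table): in T-SQL, every expression on the right-hand side of a SET list is evaluated against the row's pre-update values, so the order of the assignments in the list does not matter:

DECLARE @t TABLE (a int, b int);
INSERT INTO @t VALUES (1, 0);

UPDATE @t SET a = 99, b = a;   -- b receives the OLD value of a

SELECT * FROM @t;              -- returns a = 99, b = 1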
ProdName  Amount  TranType
P1        100     A
P1        100     S
P2        200     A
P2        205     S
Where the ProdName is the same and the Amounts are equal (or within +/- 5% of each other), I have to update the TranType column to IN/OUT respectively, as shown in the table below.
I am okay with using two different tables if needed, i.e. the records come in via one table and I can then reference that table to load the values into another.
ProdName  Amount  TranType
P1        100     IN
P1        100     OUT
P2        200     IN
P2        205     OUT
The records can arrive in any order; the matching rows need not be adjacent.
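A single-statement sketch, assuming a hypothetical dbo.ProdTrans table where each ProdName has exactly one 'A' row and one 'S' row, and taking the 5% tolerance relative to the row being updated:

UPDATE t
SET t.TranType = CASE t.TranType WHEN 'A' THEN 'IN' ELSE 'OUT' END
FROM dbo.ProdTrans AS t
WHERE t.TranType IN ('A', 'S')
  AND EXISTS (SELECT 1
              FROM dbo.ProdTrans AS o
              WHERE o.ProdName = t.ProdName
                AND o.TranType IN ('A', 'S')
                AND o.TranType <> t.TranType
                AND ABS(o.Amount - t.Amount) <= 0.05 * t.Amount);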