SQL Server 2008 :: How To Insert Blog Data Into A Database
Sep 28, 2015
I've never worked with a column defined as a binary data type. (In the past, if we ever had to include a photo, we stored a link to the JPG/BMP/PNG/whatever in a column, but never actually inserted or updated a column with binary data.) But now I've got to do that. So how do you put data into a column defined as BINARY(4096)?
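For what it's worth, a minimal sketch of two common ways to populate a BINARY(4096) column; the table, column, and file path below are made up for illustration:

-- Hypothetical table with a fixed-width binary column.
CREATE TABLE dbo.ImageStore (ImageID INT PRIMARY KEY, ImageData BINARY(4096));

-- 1) Insert a hex literal; BINARY(4096) right-pads shorter values with 0x00.
INSERT INTO dbo.ImageStore (ImageID, ImageData)
VALUES (1, 0x89504E470D0A1A0A);

-- 2) Load a file from disk. Files longer than 4096 bytes would be truncated
--    by the CONVERT; VARBINARY(MAX) is the usual choice for whole files.
INSERT INTO dbo.ImageStore (ImageID, ImageData)
SELECT 2, CONVERT(BINARY(4096), BulkColumn)
FROM OPENROWSET(BULK 'C:\images\photo.png', SINGLE_BLOB) AS img;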
I am writing a query to return some production data. Basically, I need to insert either 1 or 2 rows into a table variable based on a decision as to whether the production part makes 1 or 2 items (the raw data does not allow for this; it comes from a lookup in my database).
I can retrieve all the source data I need easily, but when I come to insert it into the table variable I need to insert 1 record if it's a single part or 2 records if it's a twin part. I know I could use a cursor, but I'm sure there has to be an easier way! See the sketch after the code below.
Below is the code I have at the moment:
DECLARE @startdate DATETIME
DECLARE @enddate DATETIME
DECLARE @Line INT
DECLARE @count INT

SET @startdate = '2015-01-01'
SET @enddate = '2015-01-31'
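One cursor-free approach, sketched under assumptions (a source table dbo.Production with an IsTwinPart flag; neither name is from the post): join the source rows to a two-row constant table so twin parts fan out into two rows.

DECLARE @Results TABLE (PartID INT, ItemNo INT, ProducedOn DATETIME);

INSERT INTO @Results (PartID, ItemNo, ProducedOn)
SELECT p.PartID, n.ItemNo, p.ProducedOn
FROM dbo.Production AS p
JOIN (VALUES (1), (2)) AS n(ItemNo)
    ON n.ItemNo <= CASE WHEN p.IsTwinPart = 1 THEN 2 ELSE 1 END
WHERE p.ProducedOn BETWEEN @startdate AND @enddate;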
I work with SQL Server 2008 on a database. We have exported the schema and data with the following steps:
Right-click the database => Tasks => Generate Scripts => select all objects => click Advanced => set 'Types of data to script' => Schema and data.
Now we have a file with all the data and schema. That's perfect... But how can I insert the file into another database? OK, I could copy and paste all the data into Management Studio and press F5, but when I do this Management Studio fails because the size of the file is > 200 MB!
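One workaround that avoids loading the whole script into Management Studio: run the file with the sqlcmd utility from a command prompt instead (server, database, and path here are placeholders):

sqlcmd -S TargetServer -d TargetDatabase -E -i "C:\scripts\schema_and_data.sql"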
I was wondering if there was a way to redirect an insert to another column...
Example: Original Insert Statement: INSERT INTO [table] ([columnA], [columnB]) SELECT '2015-01-01 00:00:00', 99.99
We have changed [columnB] from a DECIMAL(19,9) to a computed column, so we added another column, [columnC], to take [columnB]'s insert data. I thought we could use an INSTEAD OF INSERT trigger, but that fails with the message "cannot be modified because it is either a computed column or is the result of a UNION operator".
This is the trigger I was using:
CREATE TRIGGER [Trigger] ON [table]
INSTEAD OF INSERT
AS
INSERT INTO [table] ([columnA], [columnC])
SELECT [columnA], [columnB]
FROM inserted;
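A hedged guess at why, plus a sketch of a workaround: the INSERT is rejected at compile time because it names the computed column, before any trigger fires, so an INSTEAD OF trigger on the table can't intercept it. Routing inserts through a view with its own INSTEAD OF trigger avoids writing to the computed column directly; existing statements would then target the view rather than the table. The view name is invented:

CREATE VIEW dbo.TableForInsert
AS
SELECT [columnA], [columnB] FROM [table];
GO

CREATE TRIGGER [TriggerOnView] ON dbo.TableForInsert
INSTEAD OF INSERT
AS
INSERT INTO [table] ([columnA], [columnC])
SELECT [columnA], [columnB]
FROM inserted;
GO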
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH
(
    CODEPAGE = 'RAW',
    DATAFILETYPE = 'native',
    FIELDTERMINATOR = ' ',
    KEEPIDENTITY,
    KEEPNULLS
);
GO
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
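A hedged observation on the first error: DATAFILETYPE = 'native' is for binary bcp exports, not character .csv files, and terminator mismatches produce exactly this "column is too long" cascade. A first test along these lines (the terminator values are guesses about the file):

BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n',
      KEEPIDENTITY, KEEPNULLS);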
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table, use the StartDate and EndDate as the period for the select statement, step through each date between those two dates, and then insert the data into the PaymentsLog table?
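A hedged sketch without a cursor: a recursive CTE can expand each StartDate/EndDate pair into one row per date (only StartDate and EndDate come from the post; the other names are assumptions):

;WITH Dates AS
(
    SELECT dp.StartDate AS LogDate, dp.EndDate
    FROM dbo.DatePeriod AS dp
    UNION ALL
    SELECT DATEADD(DAY, 1, d.LogDate), d.EndDate
    FROM Dates AS d
    WHERE d.LogDate < d.EndDate
)
INSERT INTO dbo.PaymentsLog (LogDate)
SELECT LogDate
FROM Dates
OPTION (MAXRECURSION 0);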
When assigning permissions to an authenticated user connecting to a server database, if I want the user to be able to insert / update / delete data on database objects, specifically tables, what permission should be assigned to that user?
My thought was INSERT / UPDATE / DELETE; however, someone suggested that the EXECUTE permission would do this...
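The direct approach, sketched (user and object names are placeholders): grant the DML permissions themselves. EXECUTE, by contrast, applies to stored procedures and functions, so it only covers data changes made through those modules, not direct table access.

GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.SomeTable TO [DOMAIN\SomeUser];
-- or, to allow changes only through procedures:
GRANT EXECUTE ON dbo.SomeProcedure TO [DOMAIN\SomeUser];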
Hi. I am currently building a blog application, and I need some advice... I have the post title, post date, posted by, etc. stored in a database table; however, I was wondering whether I should store the actual post content in the database as well, or in a text file. For example, if the posts get really long, would it slow down database performance, or would there not be much of a difference? Furthermore, if I wanted to keep the posts private, a text file would not be ideal, as it can be accessed easily by surfers... What do you recommend? Thanks a lot.
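If the content stays in the database, a hedged sketch of the usual shape (all names invented): NVARCHAR(MAX) stores long values off-row automatically, so long posts don't bloat the row, and access control stays with the database rather than the file system.

CREATE TABLE dbo.BlogPosts (
    PostID      INT IDENTITY(1,1) PRIMARY KEY,
    PostTitle   NVARCHAR(200) NOT NULL,
    PostDate    DATETIME      NOT NULL CONSTRAINT DF_BlogPosts_PostDate DEFAULT (GETDATE()),
    PostedBy    NVARCHAR(128) NOT NULL,
    PostContent NVARCHAR(MAX) NOT NULL
);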
I have XML data passed to a stored proc in the following format, and within the stored proc I am accessing the data in the XML using the nodes() method.
Here is an example of what I am doing:
DECLARE @Participants XML
SET @Participants =
'<ArrayOfEmployees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Employees EmpID="1" EmpName="abcd" />
    <Employees EmpID="2" EmpName="efgh" />
</ArrayOfEmployees>'

SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
       Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM @Participants.nodes('/ArrayOfEmployees/Employees') AS Participants(Node)
1) I can't get the 'copy database' function to work from SQL Server 2008 to SQL Server 2005. I connect OK. Everything goes to the last step and then it fails.
2) I can't get a SQL Server 2008 backup to restore on SQL Server 2005 either. The only way I know that works is to script the creation of all tables, then export and import the data. That does work. How can I get the 'entire' database, structure and data, from 2008 to 2005? Thanks. SQL newbie.
I am tasked with identifying the source database name, ID, and server name for each staging table that I create. I need to add these to a derived column on all staging tables created by merging the same tables from different servers together.
When doing a Merge Join, there is no way to identify the source of the data, so I would like to see if data came from one database more than the other servers, or if there are duplicates across servers.
The thing that bugs me about the SSIS Data Flow task is that there is no easy way to run an Execute SQL Task after I select my ADO.NET Source to get this information, because my connection string is dynamic and there is no way of knowing which data source is being picked up at runtime.
For example, I have a Products table on Server 1 and Server 2.
Server 2 has more Products, and I would like to join the two together to create a staging table. A sketch of one workaround follows.
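One hedged workaround that sidesteps the Execute SQL Task entirely: put the lineage into each ADO.NET Source's query, so whatever connection is picked up at runtime tags its own rows before the Merge Join (the Products columns are invented):

SELECT p.ProductID,
       p.ProductName,
       DB_NAME()    AS SourceDatabaseName,
       DB_ID()      AS SourceDatabaseID,
       @@SERVERNAME AS SourceServerName
FROM dbo.Products AS p;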
I am restoring a database with 10 years' worth of data in monthly partitions, but I would like to keep only 5 years of data after the restore is done. What is the best/fastest approach to delete the other 5 years of data without dropping the partitions, as that may leave the DB inaccessible?
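A hedged sketch of the usual fast path on partitioned tables: switch each expired monthly partition into an empty staging table with the same structure and filegroup, then truncate the staging table; the partition scheme itself is untouched. The table names and partition number below are illustrative only.

ALTER TABLE dbo.FactHistory
    SWITCH PARTITION 1 TO dbo.FactHistory_Staging;

TRUNCATE TABLE dbo.FactHistory_Staging;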
I need to recover some data in a table, but I'm not 100% sure of the right way to do this safely.
I'll need to query the two tables to compare the before and after, but how do I go about restoring/attaching the backup database in SQL without causing conflicts?
If I restore it, I assume it would just overwrite the live database, which is obviously the worst thing that can happen. If I attach the backup, how does this affect the current live DB? How do I make sure that it's not being accessed and mistaken for the live DB?
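A hedged sketch: restore the backup under a different database name with new file paths, so the live database is never touched. All names and paths here are assumptions; check the actual logical file names first.

RESTORE FILELISTONLY FROM DISK = 'D:\Backups\LiveDB.bak';

RESTORE DATABASE LiveDB_Recovery
FROM DISK = 'D:\Backups\LiveDB.bak'
WITH MOVE 'LiveDB_Data' TO 'D:\Data\LiveDB_Recovery.mdf',
     MOVE 'LiveDB_Log'  TO 'D:\Logs\LiveDB_Recovery.ldf',
     RECOVERY;

-- Compare before/after, e.g. rows present in the restored copy but not live:
SELECT * FROM LiveDB_Recovery.dbo.SomeTable
EXCEPT
SELECT * FROM LiveDB.dbo.SomeTable;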
What I want to do is find the best way to insert high-speed data (which arrives every 10 ms) into a SQL Server Express database table.
I have two options for storing the data in the table. One is to insert all the data in one row; the other is to divide it into many rows in the same table. With approach A I will get about 10 records per second, and with approach B I may get 30-50 records per second, based on the amount of data coming in; i.e., in approach B I am creating a new row in the table for every extra column and duplicating the other columns.
I want to find the better way to insert the data based on performance metrics like CPU usage and memory usage.
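A hedged observation that applies to either layout: at these rates, per-statement and per-transaction overhead tends to dominate, so batching several readings into a single multi-row INSERT inside one transaction (table and column names below are invented) is usually the first thing to measure before choosing between A and B.

BEGIN TRANSACTION;
INSERT INTO dbo.SensorReadings (ReadTime, ChannelID, Value)
VALUES ('2015-01-01 00:00:00.010', 1, 10.5),
       ('2015-01-01 00:00:00.010', 2, 11.2),
       ('2015-01-01 00:00:00.020', 1, 10.6);
COMMIT TRANSACTION;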
I have two tables, each with one row-identifier column of INT datatype. Both of these columns are part of the respective primary keys. Now, as part of my process, I'm inserting one small part of the data from one table into the other table. This was working fine, but I suddenly started getting an error like:
Violation of PRIMARY KEY constraint 'PK_TargetTable'. Cannot insert duplicate key in object 'DW.TargetTable'. The duplicate key value is (58544748).
First I checked with DBCC CHECKIDENT with NORESEED and found that there is a difference between the current identity value and the current column value. I fixed it by running DBCC CHECKIDENT. But to my surprise I got the same issue again. The interesting thing is that the error comes after inserting 65466 records.
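A hedged diagnostic sequence (the table name comes from the error message; the rest is an assumption about the setup):

DBCC CHECKIDENT ('DW.TargetTable', NORESEED);  -- report current identity vs. max column value
DBCC CHECKIDENT ('DW.TargetTable', RESEED);    -- realign the identity to the column max
-- If the mismatch keeps returning, look for another writer: a process using
-- SET IDENTITY_INSERT ON, or inserts that supply explicit key values.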
I've got a piece of code that returns 53 records when using just the SELECT section. When I change it to INSERT INTO ... SELECT, it only inserts 39 records into the receiving table. There are no keys/constraints/indices or anything else on the receiving table (it's just a dumping ground for some data that will be processed later).
The code for creating the table is here:
USE [CDSExtractInpatients6.2]
GO
/****** Object: Table [dbo].[CDS_Inpatients_CDS_Feeds_Import] Script Date: 22/05/2015 15:54:15 ******/
SET ANSI_NULLS ON
GO
[code]...
I know most of the date fields are being created as VARCHAR here, but this is something I inherited, and the SELECT outputs the dates as text. I don't know if it makes any difference, but the server is running SQL 2008.
Has anyone seen an INSERT statement deadlocking itself? Most of the articles published by Microsoft say to change the transaction isolation level from Read Committed to Read Committed Snapshot. Below is the XML file for the deadlock.
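For reference, the change those articles describe, expressed as a statement (the database name is a placeholder):

ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;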
I'm able to successfully import data in a tab-delimited .txt file using the following statement.
BULK INSERT ImportProjectDates
FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
However, in order to import the text file, I had to add columns to the text file to match the columns that exist in the table. The original file is an export out of another database and contains all but 5 of the columns from my DB.
How do I control which columns BULK INSERT actually imports when working with a .txt file? I tried using a FORMAT FILE; however, I kept getting errors, which I tracked down to a case of not using it correctly with a .txt file.
Yes, I could have the DBA add the missing columns to the query on the other DB to create the columns; however, I'd like to know a little bit more about this overall.
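For the record, a hedged sketch of how a non-XML format file controls the mapping: each field in the file is mapped to a target column by server column number, table columns that no field maps to are left to their defaults/NULL, and a field can be skipped entirely by mapping it to server column 0. Field widths, names, and the version number below are illustrative, not taken from the actual table.

ImportProjectDates.fmt:
10.0
3
1  SQLCHAR  0  100  "\t"    1  ProjectName  ""
2  SQLCHAR  0  24   "\t"    2  StartDate    ""
3  SQLCHAR  0  24   "\r\n"  0  Unused       ""

BULK INSERT ImportProjectDates
FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FORMATFILE = 'C:\tmp\ImportProjectDates.fmt');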
I have to perform a bulk import on a regular basis and have created a script to do this. The problem is that the .csv file has 12 columns and the table to import into has 14. To work around this discrepancy I have decided to use a format file. The problem is how to create one.
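One hedged way to get a starting template, rather than writing it by hand: have bcp generate a format file from the target table, then edit it down to the 12 fields actually present in the .csv (server, database, and table names are placeholders):

bcp MyDatabase.dbo.MyTable format nul -c -t; -f MyTable.fmt -S MyServer -T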
I don't understand why my SELECT ... INTO is not working in my SQL Server 2008 R2. I have code I am using on an existing table, trying to put the data into a brand new table, but it keeps giving me an error.
SELECT mCid, caucasian, aa, api, aian, mr, his, MAX(val)
INTO memberrand
FROM MEMBERV2_RAND
CROSS APPLY (
    SELECT AA UNION ALL
    SELECT API UNION ALL
    SELECT AIAN UNION ALL
    SELECT MR UNION ALL
    SELECT HIS UNION ALL
    SELECT CAUCASIAN
) v(val)
GROUP BY mCid, AA, API, AIAN, MR, HIS, caucasian;
The error is: Msg 1038, Level 15, State 5, Line 1. An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name. Do I have to actually create this table with no values first and then run the query? I was hoping this was sort of a make-a-table query.
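Reading Msg 1038 literally, the aggregate is the unnamed column: SELECT ... INTO has to name every column it creates, so aliasing MAX(val) should be the only change needed; memberrand does not have to exist beforehand. For example:

SELECT mCid, caucasian, aa, api, aian, mr, his, MAX(val) AS maxval
INTO memberrand
FROM MEMBERV2_RAND
CROSS APPLY (
    SELECT AA UNION ALL
    SELECT API UNION ALL
    SELECT AIAN UNION ALL
    SELECT MR UNION ALL
    SELECT HIS UNION ALL
    SELECT CAUCASIAN
) v(val)
GROUP BY mCid, AA, API, AIAN, MR, HIS, caucasian;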
What statement do I use, as part of an insert trigger, to insert XML data from the XML database into a flat-file database, checking whether a record with a specific ID already exists so it can be deleted first and the changed record inserted, instead of adding the change or update alongside the original?
What I'm trying to do is take the XML-formatted data out of one SQL Server database and insert only the data in that XML into another SQL database, so I can play with the data.
The problem is that if the data in the XML is updated or changed for a specific record in the original XML database, the trigger inserts another copy into the created database (which I don't want).
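A hedged sketch of the delete-then-insert pattern inside the trigger; every table and column name here is invented, since the actual schema isn't shown:

CREATE TRIGGER trg_SyncFlatCopy ON dbo.XmlSource
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Remove any existing copy of each changed record first...
    DELETE f
    FROM dbo.FlatCopy AS f
    INNER JOIN inserted AS i ON i.RecordID = f.RecordID;
    -- ...then insert the current version, so the copy never accumulates duplicates.
    INSERT INTO dbo.FlatCopy (RecordID, SomeValue)
    SELECT i.RecordID, i.SomeValue
    FROM inserted AS i;
END;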
I am trying to BULK INSERT CSV files using a stored procedure in SQL Server 2008 R2 SP3. Although the files contain several thousand lines and BULK INSERT returns no errors, no data is actually imported into the table. Every field in the table is of the NVARCHAR(50) datatype.
Here is the code for the operation (only the parameters for the insert itself):
SET @open = 'bulk insert [DWHStaging].[dbo].[Abverkaufsquote] from '''
SET @path = 'G:\DataStaging\DWHStaging\Source\Abverkaufsquote'
SET @params = ''' with (firstrow = 2, datafiletype = ''widechar'', fieldterminator = '';'', rowterminator = '' '', codepage = ''1252'', keepnulls);'
The CSV file originates from a DB2 database. Using exactly the same code base, I can import several other types of CSV files without a problem.
The files are stored on the local server as UCS-2 Little Endian, and one difference is that the files that do not import do not include a BOM. The other difference is that the failed files are non-Unicode files.
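That last difference may be the whole story: DATAFILETYPE = 'widechar' tells BULK INSERT to expect UTF-16 input, and a plausible explanation is that an ANSI file read as widechar yields no usable rows without raising an error. A hedged test for the failing files (the path, including the extension, is assumed as above):

BULK INSERT [DWHStaging].[dbo].[Abverkaufsquote]
FROM 'G:\DataStaging\DWHStaging\Source\Abverkaufsquote.csv'
WITH (FIRSTROW = 2, DATAFILETYPE = 'char', FIELDTERMINATOR = ';',
      ROWTERMINATOR = '\n', CODEPAGE = '1252', KEEPNULLS);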
I am experimenting with using CDC to track user changes in our application database. So far I've done the following:
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO

-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
[Code] ....
The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The VB.NET application passes a table-valued parameter to a stored procedure, which updates the table. Below is the creation script for the stored proc:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

IF EXISTS (SELECT * FROM sysobjects WHERE id = OBJECT_ID('dbo.spdv_AVAUpdate') AND sysstat & 0xf = 4)
    DROP PROCEDURE dbo.spdv_AVAUpdate
[Code] ....
When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT *
FROM cdc.dbo_AVA_CT
GO
--RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
--__$start_lsn            __$end_lsn  __$seqval               __$operation  __$update_mask  AvaKey  AvaDesc  AvaArrKey  AvaSAPAppellationID
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  3             0x02            119     Test2    6          NULL
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  4             0x02            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980003  1             0x0F            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980004  2             0x0F            119     Test4    6          NULL
Why might this be happening, and what can be done to correct it? Also, is there any way to get the user ID associated with the CDC change?
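On the second question: CDC records transactions, not logins, so the user ID is not in the change table. One hedged workaround is to add an audit column that the capture instance then picks up (the column and constraint names are invented):

ALTER TABLE dbo.AVA
    ADD LastModifiedBy NVARCHAR(128) NOT NULL
        CONSTRAINT DF_AVA_LastModifiedBy DEFAULT (SUSER_SNAME());
-- Note: a CDC capture instance does not track columns added later; it must be
-- recreated (sp_cdc_disable_table / sp_cdc_enable_table) to include the new column.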
If a column is set to allow NULLs, I know that a constraint can be used to supply a default (i.e. GETDATE()) when no value is provided, but what about when an explicit NULL is provided in an INSERT or UPDATE statement? Is there any way, other than an AFTER trigger, to substitute a value for an explicitly provided NULL? In other words, assuming that dtAsof is a NULL-enabled column, is there any way to override what the following will do to MYTABLE:
If there's no way to do this in SQL Server 2008 R2, then what about later versions of SQL Server? Do any more recent versions have a way to deal with this? We have a third-party app that uses a SQL Server back end, and many of the tables have columns for storing audit-like data, such as date/time, but many are left with NULL values, and I'd really like to fix that in as passive a way as possible so as not to break the app that uses the database. I know a constraint with a default can be used to override a missing value, but not when a NULL is explicitly provided.
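For what it's worth, a hedged sketch of one option on 2008 R2 that is not an AFTER trigger: an INSTEAD OF trigger that swaps the explicit NULL on the way in (the column list is invented apart from dtAsof):

CREATE TRIGGER trg_MYTABLE_DefaultAsOf ON dbo.MYTABLE
INSTEAD OF INSERT
AS
INSERT INTO dbo.MYTABLE (SomeCol, dtAsof)
SELECT SomeCol, ISNULL(dtAsof, GETDATE())
FROM inserted;
-- An UPDATE needs the equivalent INSTEAD OF UPDATE trigger. The app itself
-- never changes, which keeps the fix passive.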
I have a table variable into which I am inserting data from a SQL Server database. I have made one of the columns, called repaidID, a primary key so that a clustered index will be created on the table variable. When I run the stored procedure used to insert the data, I get this error message: Violation of PRIMARY KEY constraint. Cannot insert duplicate primary key in object. The value that is causing this error is (128503).
I have queried repaidID 128503 in the database to see if it is a duplicate but could not find any duplicate. The repaidID is a unique ID normally used by my company and does not have duplicates.
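A hedged check worth running: the duplicate may exist only within the result set being inserted (e.g. a join multiplying rows), not in the source table itself. Run the grouping against the same SELECT that feeds the insert (the source name below is a placeholder):

SELECT repaidID, COUNT(*) AS cnt
FROM dbo.SourceOfInsert
GROUP BY repaidID
HAVING COUNT(*) > 1;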