insert into dbo.Articles(ArticleID, BlogID, AuthorID, Body)
select
newid(),
BlogID,
UserID,
'Body ' + cast(n as nvarchar)
from (select n, newid() as ArticleID, BlogID, UserID
from @Numbers
cross join dbo.Blogs
cross join dbo.Users) as DummyData
where n <= @articles
In my Body column I get Body 1, Body 2, Body 1, ...
What am I doing wrong?
What should I do to get Body 1, Body 2, Body 3, Body 4, ...
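A hedged sketch of one way to get a continuous sequence: the repetition happens because n restarts for every Blogs × Users combination produced by the cross joins, so numbering the final row set with ROW_NUMBER() (SQL Server 2005 or later) avoids reusing n directly:

insert into dbo.Articles (ArticleID, BlogID, AuthorID, Body)
select
    newid(),
    BlogID,
    UserID,
    'Body ' + cast(row_number() over (order by BlogID, UserID, n) as nvarchar(20))
from @Numbers
cross join dbo.Blogs
cross join dbo.Users
where n <= @articles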
Is there a way to avoid entering column names in the Excel template when creating an Excel file from a dynamic query using OPENROWSET? I have the following code, and it works fine when the column names are given ahead of time. If I remove the column names from the template and just do Select * from the table and Select * from Sheet1, it tells me that the column names do not match:

Server: Msg 213, Level 16, State 5, Line 1
Insert Error: Column name or number of supplied values does not match table definition.

Here is my code:

SET @sql1 = 'select * from table1'
SET @sql2 = 'select * from table2'

IF @File_Name = ''
    SELECT @fn = 'C:Test1.xls'
ELSE
    SELECT @fn = 'C:' + @File_Name + '.xls'

-- FileCopy command string formation
SELECT @Cmd = 'Copy C:TestTemplate1.xls ' + @fn

-- FileCopy command execution through shell command
EXEC MASTER..XP_CMDSHELL @cmd, NO_OUTPUT

-- OLEDB provider and Excel destination file name
SET @provider = 'Microsoft.Jet.OLEDB.4.0'
SET @ExcelString = 'Excel 8.0;HDR=yes;Database=' + @fn

EXEC('insert into OPENROWSET(''' + @provider + ''',''' + @ExcelString + ''',''SELECT * FROM [Sheet1$]'') ' + @sql1)
EXEC('insert into OPENROWSET(''' + @provider + ''',''' + @ExcelString + ''',''SELECT * FROM [Sheet2$]'') ' + @sql2)
All of the tables in my database have keys that are autonumbered (datatype int with identity seed 1). Whenever I insert a new entry into a table, I execute an INSERT INTO command and leave off the key field, so that it is generated automatically with the new row. However, I need that number, so that I can insert it as a foreign key into another table. How would I go about retrieving this number? I thought about doing a MAX() on that field, but I am not confident that SQL Server would always use a higher number than everything previous. Is there a better way of accomplishing this? Is my design flawed from the start? Any feedback would be helpful. Thank you.
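The usual answer here is SCOPE_IDENTITY() rather than MAX(): it returns the identity value generated by the last insert in the current scope, so concurrent sessions cannot interfere. A minimal sketch with hypothetical table and column names:

declare @NewParentID int

insert into dbo.ParentTable (Name)       -- identity key column deliberately omitted
values ('some value')

set @NewParentID = scope_identity()      -- identity value produced by the insert above, in this scope

insert into dbo.ChildTable (ParentID, Detail)
values (@NewParentID, 'child row')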
I've found out how to do the "Insert into my table (col1, col2) Select col1, col2... from othertable where regId = @regId" from my earlier question, but do I have to name every column, as I have about 80 in my table? Can't I use an asterisk or something?
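If the two tables have exactly the same columns in the same order, the column lists can be dropped entirely; a sketch, assuming identical structures and no identity column in the target (otherwise the 80 columns do have to be listed, though SSMS can script the list for you):

insert into dbo.MyTable
select *
from dbo.OtherTable
where regId = @regId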
SOURCE TABLE

ID    COMMENT
123   I am joe
123   I am programmer
124   I am Wang
124   I am programmer
124   I like cricket
DESTINATION TABLE
ID    SEQ   COMMENT
123   1     I am joe
123   2     I am programmer
124   1     I am wang
124   2     I am programmer
124   3     I like cricket

Can somebody please advise the easiest way to do this in SQL 2000?
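Since SQL 2000 has no ROW_NUMBER(), one common workaround is to load the rows into a temp table with an IDENTITY column and derive the per-ID sequence from it. A sketch, assuming the order of comments within an ID does not matter and using hypothetical table names:

create table #numbered
(
    rn      int identity(1,1) primary key,
    ID      int,
    COMMENT varchar(100)
)

insert into #numbered (ID, COMMENT)
select ID, COMMENT
from dbo.SourceTable
order by ID

insert into dbo.DestinationTable (ID, SEQ, COMMENT)
select n.ID,
       n.rn - x.minrn + 1,        -- sequence restarts at 1 for every ID
       n.COMMENT
from #numbered n
join (select ID, min(rn) as minrn from #numbered group by ID) x
  on x.ID = n.ID

drop table #numbered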
I have the following situation: I have one table (tblA) into which a new record has just been inserted. Once this insert completes successfully, I want to insert a variable number of records into another table (tblB). The primary key of tblA is used inside tblB as one of the columns in each insert. I've already been able to transfer the primary key generated by the insert into tblA fairly easily. But to make things a bit more complicated, the number of records to add is decided by the outcome of a query based on an entry inside tblA (after the insert), which is then run against another table (tblC). The SELECT statement from tblC, combined with the select parameter from tblA, then decides how many records I have to insert. Sorry for the (perhaps) confusing way of writing this down, but I've been struggling with this for a couple of days now and I really need to get it working. Anybody who can help? Thanks in advance, Sunny Guam
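A rough sketch of the pattern, with hypothetical column names, assuming the insert into tblA and the follow-up run in the same procedure or batch:

declare @NewAID int, @Criterion int

insert into tblA (SomeColumn, CategoryID)          -- hypothetical columns
values (@SomeValue, @CategoryID)

set @NewAID = scope_identity()                     -- primary key generated for the new tblA row

-- read back the value on the new tblA row that drives the tblC lookup
select @Criterion = CategoryID from tblA where AID = @NewAID

-- one row goes into tblB for every tblC row the lookup returns
insert into tblB (AID, CID)
select @NewAID, c.CID
from tblC c
where c.CategoryID = @Criterion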
How can I allow users to input numbers with commas into a database field with an 'int' datatype without getting this error, 'Input string was not in a correct format'?
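The error text itself comes from the client-side parsing of the string, so one option is to strip the commas before converting, either in the application or in T-SQL. A sketch of the T-SQL side, assuming the value arrives as a string parameter:

declare @UserInput varchar(20)
set @UserInput = '1,234,567'

declare @Amount int
set @Amount = cast(replace(@UserInput, ',', '') as int)   -- 1234567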
MID, IIN and NUM_EVENTS form a composite key, and only NUM_EVENTS gets incremented. All records start with NUM_EVENTS = 1. How can I create a query that displays only those records where NUM_EVENTS = 1, meaning they're still in the first stage of processing?
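If "still in the first stage" means no later event exists for the same MID/IIN, a NOT EXISTS filter does it; a sketch assuming the table is called dbo.Events:

select e.MID, e.IIN, e.NUM_EVENTS
from dbo.Events e
where e.NUM_EVENTS = 1
  and not exists (select 1
                  from dbo.Events e2
                  where e2.MID = e.MID
                    and e2.IIN = e.IIN
                    and e2.NUM_EVENTS > 1)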
I am trying to insert an autonumber field into my table, which has about a million rows, but SQL 2005 is giving me an error ("can't insert"). I need to index my table so that the query runs faster when I perform joins on two of these huge tables. I tried inserting the identity key the way it was mentioned in the forum, but SQL doesn't let me do that.
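For reference, an identity column can normally be added to an existing table in one step (on a million-row table this rewrites every row, so it needs lock and log headroom); a sketch with hypothetical names:

alter table dbo.BigTable
add RowID int identity(1,1) not null
GO

-- an index for the joins can then be built afterwards
create unique index IX_BigTable_RowID on dbo.BigTable (RowID)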
Hello! I have a developer who is playing around with some SQL statements using VB.NET. He has a test table in a SQL 2000 database, and he has about 2000 generated INSERT statements. When the 2000 INSERT statements are run in SQL Query Analyzer, all 2000 rows are added to the table. When he tries to send the 2000 statements to SQL Server through his app, a random number of statements do not get executed. But SQL Profiler shows that each of the 2000 statements is getting sent to the server. I suggested that he add a "GO" statement at the end of the INSERT block, but the statement fails when that is sent to the server. I know that this is not the ideal way to insert bulk data into the system, but now we are all just curious as to why SQL Server doesn't execute each individual INSERT. Any thoughts?
I am able to import a CSV file into a temporary table as long as I know the number of fields in the CSV file. Here is what I would like to do:
I would like to have a CSV file with up to 6 entries per row. I would like to insert each row into a table; if there are three fields, I want to insert them into the first three columns of the temporary table. If there are four, then insert into the first four columns. Is this possible?
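One workaround is to bulk load each line into a single wide column of a staging table, so a line may legitimately carry three, four or six values, and split it afterwards with string parsing. A rough sketch of just the staging step (the splitting still has to be written; the path is hypothetical):

create table #staging
(
    RawLine varchar(4000) null
)

-- with the default tab field terminator, a comma-separated line stays in one column
-- (assuming the data itself contains no tab characters)
bulk insert #staging
from 'C:\data\input.csv'
with (ROWTERMINATOR = '\n')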
My table has a running-number primary key. I want to insert data into this table and automatically generate the running-number PK. I tried to write a SQL command like this:

SELECT MAX(ID) AS MaxId FROM test
INSERT INTO test (ID, data1, data2, data3) VALUES (MaxId + 1, @data1, @data2, @data3)

but it fails. How should I do it? Thank you in advance.
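It fails because MaxId is only a column alias in the first statement, not a variable the second statement can see. The two steps can be combined into one statement, although an IDENTITY column is usually the safer design because two sessions can otherwise read the same MAX at the same time. A sketch:

insert into test (ID, data1, data2, data3)
select isnull(max(ID), 0) + 1, @data1, @data2, @data3
from test with (tablockx, holdlock)   -- crude guard against two sessions computing the same MAX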
Hi, good morning to all. My table: User_Group_Map (UserID UNIQUEIDENTIFIER, GroupID UNIQUEIDENTIFIER). Now, I want to write one stored procedure that can insert rows into the above table, but many rows at once. That is, the program should allow multiple insertions without the front end having to call the stored procedure many times. Can anyone please help me with this? Thanks in advance... Ashok kumar.
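One common pattern on SQL Server 2005 and later is to pass all the rows in a single XML parameter and shred it inside the procedure; a sketch, assuming the element and attribute names shown:

create procedure dbo.InsertUserGroupMap
    @Rows xml      -- e.g. '<rows><r u="GUID1" g="GUID2"/><r u="GUID3" g="GUID4"/></rows>'
as
begin
    insert into dbo.User_Group_Map (UserID, GroupID)
    select r.value('@u', 'uniqueidentifier'),
           r.value('@g', 'uniqueidentifier')
    from @Rows.nodes('/rows/r') as t(r)
end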
Dear all, I have a SqlDataSource with a simple select command (e.g. "select a,b,c from foo"). The insert command is a stored procedure that takes fewer parameters than there are columns in the select statement (e.g. "insertFoo(a char(10))"). When used in combination with a FormView, I get a "Procedure or function insertFoo has too many arguments specified" error. It seems that the FormView always posts all columns as the parameter collection to the insert command (a breakpoint in the formview_inserting event shows this). Am I doing something wrong, or is this by design? Is the only solution to manually tweak the parameters in the formview_inserting event? TIA, Jernej
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[logMsg](
    [logMsgID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [msg] [nvarchar](256) COLLATE Latin1_General_CI_AS NOT NULL,
    [AppId] [int] NULL,
    CONSTRAINT [PK_logMsg] PRIMARY KEY CLUSTERED
    (
        [logMsgID] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
and trying to insert values with
INSERT INTO [ProxyDB].[dbo].[logMsg] ([msg] ,[AppId]) VALUES ('Text Test',1)
Getting error message:
Msg 213, Level 16, State 1, Procedure TrgInslogMsg, Line 14 Insert Error: Column name or number of supplied values does not match table definition.
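The error message points at line 14 of the trigger TrgInslogMsg rather than at the INSERT itself, which suggests the trigger inserts into some other table without naming columns, and that table's definition no longer matches. A hedged illustration of the kind of statement that raises Msg 213 inside a trigger, and the usual fix (the audit table name is hypothetical):

-- problematic pattern: breaks as soon as the column counts of the two tables differ
insert into dbo.logMsgArchive
select * from inserted

-- safer: name the columns explicitly on both sides
insert into dbo.logMsgArchive (logMsgID, msg, AppId)
select logMsgID, msg, AppId from inserted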
Hi, I have a field: usercode [tinyint].

In Query Analyzer:
UPDATE tblUserProcess SET usercode = 1002
Result: Error "Arithmetic overflow error for data type tinyint, value = 1002. The statement has been terminated."

In VBA/Access (linked to SQL Server):
intOptions = 512
pstrQuerySQL = "UPDATE ..."
CurrentDb.Execute pstrQuerySQL, intOptions
Result: no errors, inserted value 223 (???)

Why? Thanks, Eugene
I have a table with PO#, Days_to_travel, and Days_in_warehouse fields. I take the distinct Days_in_warehouse values in the table and insert them into a temp table. I want a script that will insert all of the values in the Days_in_warehouse field from the temp table into the Days_in_warehouse_batch column in table 1, by PO#, duplicating the PO records until every PO has a record per distinct value.
Example:
Temp table (contains only one field, with all distinct values from table 1):

Days_in_warehouse
20
30
40
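One way to read this is that every existing PO row should be repeated once per distinct value, carrying that value in Days_in_warehouse_batch. A sketch under that assumption, with hypothetical names (#DistinctDays being the temp table above):

insert into dbo.Table1 (PO, Days_to_travel, Days_in_warehouse, Days_in_warehouse_batch)
select p.PO, p.Days_to_travel, p.Days_in_warehouse, d.Days_in_warehouse
from dbo.Table1 p
cross join #DistinctDays d    -- each existing PO row is multiplied by the number of distinct values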
The first test writes 5000000 rows into one table. I realise this is not representative OLTP behaviour, but it helped me start interpreting performance counters and test several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1, min/max server memory was set to 9215MB/10751MB.
In test 2, min/max server memory was set to 13311MB/14847MB.
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)
Since about 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Well, actually you can see here that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4000000 rows, but there's no real impact on write latency.
Finally, I want to note:

- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed in a loop, not as one big transaction
- to monitor progress and behaviour/impact, the counters are stored every 10,000 loops using DMV queries (a sketch of such a query follows below)
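For reference, a sketch of the kind of counter snapshot that can be taken every 10,000 loops; the counter names are the standard ones exposed by sys.dm_os_performance_counters, while the snapshot table is hypothetical:

insert into dbo.CounterSnapshots (capture_time, counter_name, cntr_value)
select getdate(), rtrim(counter_name), cntr_value
from sys.dm_os_performance_counters
where rtrim(counter_name) in ('Buffer cache hit ratio',
                              'Page life expectancy',
                              'Free list stalls/sec',
                              'Lazy writes/sec')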
So I wonder why SQL Server starts to execute so many reads in test 1.
I get this error when I look at the state of my SQLresults object. Have I coded something wrong?

Item = In order to evaluate an indexed property, the property must be qualified and the arguments must be explicitly supplied by the user.

conn.Open()
Dim strSql As String
I try to import data with bulk insert. Here is my table:
CREATE TABLE [data].[example](
    col1 [varchar](10) NOT NULL,
    col2 [datetime] NOT NULL,
    col3 [date] NOT NULL,
    col4 [varchar](6) NOT NULL,
    col5 [varchar](3) NOT NULL,
The first column of the file should be stored twice in my table (in both col2 and col3).
My file:

Col1,Col2,Col3,Col4,Col5,Col6,Col7
2015-04-30@|@MDDS@|@ADP@|@EUR@|@185.630624@|@2015-04-30@|@MDDS
2015-04-30@|@MDDS@|@AED@|@EUR@|@4.107276@|@2015-04-30@|@MDDS
My command:

bulk insert data.example
from 'R:epoolexample.csv'
WITH (FORMATFILE = 'R:cfgexample.fmt', FIRSTROW = 2)
I get the error:

Msg 4823, Level 16, State 1, Line 2
Cannot bulk load. Invalid column number in the format file "R:cfgexample.fmt".
I changed some things: used ";" and "," as the column delimiter, changed the file type from UNIX to DOS, and adjusted the format file with " " as the row delimiter.
I also removed this line from the format file:

1 SQLCHAR 0 10 "@|@" 2 Col2 ""

Nothing works...
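If the format file keeps failing, one fallback is to do without it: bulk load the file into a staging table with one column per file column (FIELDTERMINATOR accepts a multi-character terminator such as '@|@'), then INSERT...SELECT only the columns the real table needs. A sketch with hypothetical staging names, path, and column mapping:

create table #staging
(
    Col1 varchar(20), Col2 varchar(20), Col3 varchar(20), Col4 varchar(20),
    Col5 varchar(30), Col6 varchar(20), Col7 varchar(20)
)

bulk insert #staging
from 'R:\example.csv'                    -- hypothetical path
with (FIELDTERMINATOR = '@|@', ROWTERMINATOR = '\n', FIRSTROW = 2)

insert into data.example (col1, col2, col3, col4, col5)
select Col2, Col1, Col6, Col3, Col4      -- hypothetical mapping; adjust to the real file layout
from #staging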
How do I insert a row number zone-wise (i.e. grouped by the zone column) in an SSRS report? In the zone column I should get ZONE1 only once (not ZONE1, ZONE1, ZONE1 three times).
sl.no   Zone    District   no.of.region
1       ZONE1   hyd        24
2               chn        12
3               bang       2
1       ZONE2   raj        4
2               vizag      3
3               bbb        34
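On the dataset side, the per-zone numbering can be produced with ROW_NUMBER() partitioned by zone (the report then shows the Zone value only on the group header cell, which gives the "once per zone" display). A sketch with hypothetical table and column names:

select row_number() over (partition by Zone order by District) as sl_no,
       Zone,
       District,
       no_of_region
from dbo.RegionSummary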
I have a table in which a non-primary-key column has a unique index on it. If I insert a record into this table with a duplicate value in the indexed column, what will be the error number in the above scenario? Or how could I find this out?
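One way to find out on your own system is to force the duplicate insert inside TRY...CATCH and read ERROR_NUMBER(); for what it's worth, a unique-index violation normally surfaces as error 2601 and a unique-constraint violation as 2627. A sketch with hypothetical names:

begin try
    insert into dbo.MyTable (IndexedCol, OtherCol)
    values ('duplicate value', 1)
end try
begin catch
    select error_number() as ErrNo, error_message() as ErrMsg
end catch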
I have created a local user on the Report Server computer and the user has administrative rights. When I try to connect to the Report Server (http://xxx.xxx.xxx.xxx/reportserver) with this user's credentials (the ReportServer directory security is set only to Basic Authentication), I get the following error.
The number of requests for "XXXServerXXXUser" has exceeded the maximum number allowed for a single user. (SQL Server Reporting Services)
When I then try to log in using a different user with administrative rights on the machine, I can log on successfully. The system has been up for a month, but this problem only occurred today. What could be the problem?
declare @NumberToCompareTo int
set @NumberToCompareTo = 8

declare @table table
(
    number int
)

insert into @table
select 4
[Code] ....
The query selects 4 and 5, of course. Now what I'm looking for is to retrieve the number less than or equal to @NumberToCompareTo, i.e. the closest number below the parameter. So in this case, 5.
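A minimal sketch against the table variable above:

select max(number) as ClosestLowerOrEqual
from @table
where number <= @NumberToCompareTo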
In my SQL, I want to change a decimal number into a percent-formatted number, just so it is convenient for users. For example, given the decimal number 0.98, I want to change it to 98%. How can I accomplish this?
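A sketch of the conversion in plain T-SQL (FORMAT() only exists from SQL Server 2012 onwards, so the string is built by hand here):

declare @d decimal(5,4)
set @d = 0.98

-- numeric value scaled to percent
select @d * 100                                              -- 98.00

-- as text with a percent sign
select cast(cast(@d * 100 as int) as varchar(10)) + '%'      -- 98%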
I am currently designing a SSIS package to integrate data into a data warehouse fact table. This fact table has about 70 columns among which 17 are foreign keys for dimension tables.
To insert data in that table, I have to make several transformations and lookups. Given the fact that the lookups I have to make are a little complicated, I have about 70 tasks in my Data Flow. I know it's a lot, but I can't find a way to make it simpler. It seems I really need all these tasks.
Now, the problem is that every new action I try to make on the package takes a lot of time. At design time, everything is very slow. My processor is heavily loaded each time I change a single setting in one of the tasks, and executing the package in debug mode takes ages. If I look at the size of my package file on disk, it's more than 3MB.
Hence my question: are there any limitations in terms of the number of columns or the number of tasks that can be processed within a Data Flow?
If not, then do you have any idea why it's so slow ?
I have a large table of customers. I would like to add a column that contains an integer unique to that customer. The trick is that this file contains many duplicate customers, so I want the duplicates to all share the same number. The numbers don't have to be sequential or anything; alike customers just need to end up with the same one.
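On SQL Server 2005 and later, DENSE_RANK() gives every distinct customer identity its own integer while duplicates share it. A sketch, assuming duplicates are identified by matching CustomerName + Address (hypothetical columns; adjust to your real matching rule):

alter table dbo.Customers add CustomerNo int null
GO

update c
set CustomerNo = x.CustomerNo
from dbo.Customers c
join (select CustomerName, Address,
             dense_rank() over (order by CustomerName, Address) as CustomerNo
      from (select distinct CustomerName, Address from dbo.Customers) d
     ) x
  on x.CustomerName = c.CustomerName
 and x.Address = c.Address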