I import an MS Excel 2003 spreadsheet into MS SQL Server 2000 through MS SQL Server 2000 Enterprise Manager (right-click the table to be filled with data, then All Tasks, then Import Data). My Excel file has 2000 rows and 100 columns of data. All of the data lands in the relevant columns correctly, but the rows come back in a different order. What I mean is that the first row matches my Excel file, but the second row's data has gone to the 7th row, the 7th row has gone to the 5th row, and so on. I need the data in the same sequence as in my Excel file. What is causing this, and how can I solve it?
Can I export my MS Excel 2003 file to an MS SQL Server database?
Please help me; I don't have much knowledge of MS SQL Server 2000.
If your answer includes a query to run, please mention where I should run that query.
Thanks,
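A table has no guaranteed row order on its own, so one way to keep the spreadsheet order (a minimal sketch; the table and column names are placeholders, and the extra IDENTITY column is an assumption not in the original post) is to give the destination table a row-number column that records the order rows arrive in, and sort on it when reading:

-- Hypothetical destination table; RowNo simply records load order
CREATE TABLE dbo.ImportedSheet (
    RowNo INT IDENTITY(1,1) NOT NULL,   -- assumed helper column, not part of the spreadsheet
    Col1  NVARCHAR(255) NULL,
    Col2  NVARCHAR(255) NULL
    -- ... remaining spreadsheet columns
)

-- After the Import Data wizard loads the sheet into this table,
-- read it back in the original file order:
SELECT Col1, Col2
FROM dbo.ImportedSheet
ORDER BY RowNo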
I have a table with a column called req_id, which I have set as the primary key. So if I just do SELECT * FROM table, shouldn't the rows be returned sorted by the order in which they were inserted?
This database was imported from an Access database. When I did this in Access it would return the rows sorted by the order they were inserted, but now in MS SQL it's not sorted in that order. I can't really tell what kind of order it's in.
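Without an ORDER BY, SQL Server does not guarantee any particular order for SELECT *; a minimal sketch (the table name my_table is a placeholder, since the post does not give one) is to sort on the key explicitly:

SELECT *
FROM my_table
ORDER BY req_id   -- matches insertion order only if req_id always increases with each insert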
I know SQL Server 2008 has a timestamp data type that adds the date and time when rows are inserted. Is there a way to automatically update the date and time when the rows are updated?
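One common pattern (a minimal sketch; the table dbo.MyTable and the LastModified column are assumptions, not from the post) is a DATETIME column with a default for inserts plus an AFTER UPDATE trigger for updates:

CREATE TABLE dbo.MyTable (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    SomeValue NVARCHAR(100) NULL,
    LastModified DATETIME NOT NULL DEFAULT GETDATE()   -- set on insert
)
GO

CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- refresh LastModified for every row touched by the UPDATE
    UPDATE t
    SET LastModified = GETDATE()
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.Id = t.Id
END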
I have some relatively simple SQL that acts differently between SQL Server 2000 and 2005. Although it is easy to fix, I'd like to know if this difference is expected (documented) or a bug, and if there is perhaps a setting/switch I can use to avoid a code review of hundreds of stored procs to look for similar scenarios. I executed the following script in SQL Server 2005:

CREATE TABLE #Floats
(
    FloatID INT IDENTITY,
    FloatNumber FLOAT NOT NULL
)

DECLARE @sngCounter FLOAT
SET @sngCounter = 20
WHILE @sngCounter >= 0
BEGIN
    INSERT INTO #Floats (FloatNumber) VALUES (@sngCounter)
    SET @sngCounter = @sngCounter - 1
END

SELECT * FROM (SELECT TOP 100 PERCENT * FROM #Floats ORDER BY FloatNumber) AS FloatNumbers

DROP TABLE #Floats
GO

It produces the following result set:

FloatID FloatNumber
------- -----------
1       20
2       19
3       18
4       17
5       16
6       15
7       14
8       13
9       12
10      11
11      10
12      9
13      8
14      7
15      6
16      5
17      4
18      3
19      2
20      1
21      0

In SQL Server 2000 the result set is this:

FloatID FloatNumber
------- -----------
21      0.0
20      1.0
19      2.0
18      3.0
17      4.0
16      5.0
15      6.0
14      7.0
13      8.0
12      9.0
11      10.0
10      11.0
9       12.0
8       13.0
7       14.0
6       15.0
5       16.0
4       17.0
3       18.0
2       19.0
1       20.0
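This is the documented SQL Server 2005 behavior: an ORDER BY inside a derived table or view (even with TOP 100 PERCENT) does not guarantee the order of the outer result; it only qualifies the TOP. The reliable fix, a minimal sketch against the same temp table, is to order the outermost query:

SELECT *
FROM (SELECT * FROM #Floats) AS FloatNumbers
ORDER BY FloatNumber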
Is it possible to have a report automatically export the results to Excel when the report is generated, rather than having to select the format and click Export? Please let me know. Thanks.
Hello! I have a small (or maybe big) problem. I want to create a stored procedure that limits the rows returned; for example, I need rows 100 to 200 of "Select * from table". What is the best way to do this?
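On SQL Server 2005 or later, one way (a minimal sketch; dbo.MyTable and its SomeKey column are placeholders, since the post names neither) is ROW_NUMBER inside the procedure:

CREATE PROCEDURE dbo.GetRows100To200
AS
BEGIN
    SELECT *
    FROM (
        SELECT *, ROW_NUMBER() OVER (ORDER BY SomeKey) AS rn   -- define "row 100" by a deterministic order
        FROM dbo.MyTable
    ) AS numbered
    WHERE rn BETWEEN 100 AND 200
END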
I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!
I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.
At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.
98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they would be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.
This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...
The server is a 1100 MHz P3 / 512MB / Windows 2000 Server / SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.
My log is pretty simple, as follows:
LogId        - int             - primary key, clustered index
ClientId     - int             - index asc
LogTypeId    - int             - index asc
LogValue     - nvarchar(2500)  - no index
LogTimeStamp - datetime        - index asc
I have come up with 3 different solutions:
Method 1: Simply run "Delete from db_log where logtypeid <> stuff_I_want_to_keep".
This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?
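For Method 1, one common way on SQL Server 2000 to keep the delete fast enough to live with (a minimal sketch; the logtypeid values 1, 2, 3 stand in for the real "keep" list, which the post does not give) is to delete in small batches so each transaction, and the locking and log activity it generates, stays small:

SET ROWCOUNT 10000                          -- cap each delete at 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM db_log
    WHERE LogTypeId NOT IN (1, 2, 3)        -- hypothetical list of types to keep
      AND LogTimeStamp < DATEADD(hour, -48, GETDATE())
    IF @@ROWCOUNT = 0 BREAK                 -- stop when nothing is left to delete
END
SET ROWCOUNT 0                              -- reset so later statements are unaffected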
Method 2: Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work for a few days we might lose data we wanted to keep.
Method 3:
Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)
But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.
However, it seems as if this method is what will work best for my set-up, unless there are other suggestions? (A sketch of this approach is included at the end of this post.)
Method 4:
...eagerly awaiting ideas!!! :-)
(Also, whatever tips and/or links to info on maintaing VLDB's are greatly appreciated. )
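For Method 3, a minimal sketch (assuming an archive table db_log_keep with the same columns already exists, and again using 1, 2, 3 as the hypothetical "keep" list) could look like this:

-- copy the 1-2% of rows worth keeping into the archive table
INSERT INTO db_log_keep (LogId, ClientId, LogTypeId, LogValue, LogTimeStamp)
SELECT LogId, ClientId, LogTypeId, LogValue, LogTimeStamp
FROM db_log
WHERE LogTypeId IN (1, 2, 3)

-- TRUNCATE deallocates pages instead of logging each row, so it is fast,
-- but it empties the table completely, including the most recent 48 hours
TRUNCATE TABLE db_log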
Hi guys, I am not able to get the result of my query sorted by month. Here is my query and the output.
declare @tbl as table(date datetime, amt int)

insert into @tbl
select '2-jan-2008', 100 union all
select '15-jan-2008', 200 union all
select '20-jan-2008', 500 union all
select '12-jan-2008', 300 union all
select '02-feb-2008', 100 union all
select '09-feb-2008', 250 union all
select '03-mar-2008', 500 union all
select '05-mar-2008', 800
select Months, SumAmt, MaxAmt, MinAmt
from (
    select datename(mm, date) as Months,
           sum(amt)           as SumAmt,
           min(amt)           as MinAmt,
           max(amt)           as MaxAmt,
           row_number() over (partition by datename(mm, date) order by datename(mm, date)) as rowid
    from @tbl
    group by datename(mm, date)
) t
The result is
Months     SumAmt  MaxAmt  MinAmt
---------  ------  ------  ------
February      350     250     100
January      1100     500     100
March        1300     800     500
I want to get it sorted month-wise, i.e. Jan, Feb, March, but I cannot find a way.
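Because the grouping column is the month name (a string), it sorts alphabetically; one fix, a minimal sketch against the same @tbl variable, is to also group on the month number and order by it in the outer query:

select Months, SumAmt, MaxAmt, MinAmt
from (
    select datename(mm, date) as Months,
           month(date)        as MonthNo,
           sum(amt)           as SumAmt,
           min(amt)           as MinAmt,
           max(amt)           as MaxAmt
    from @tbl
    group by datename(mm, date), month(date)
) t
order by MonthNo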
If a component requires a sorted input, it would seem reasonable that you could check the IsSorted property of the attached input, but this always returns false. I have tried connecting the output of the Sort transform to my component and then checking the IsSorted property for this input; it is always false. How can this be, and how can I see whether the path is indeed sorted?
If I use a virtual input column in my UI, I get a SortKeyPosition on the columns, but when overriding SetUsageType in the component class I always get zero for the key. Why is the sort information not quite there for me?
Key, Name, Address, City, State, Zip ... etc.
I would like to keep this table sorted by Name, therefore I won't have to sort my results with every query.
I think I need to add something to my insert to tell my table: "Hey, take Jones, open up the proper place and stick him in the proper spot."
Ex: We have Appleby and Robertson in our table now. My insert would tell SQL Server to take Jones, figure out where he belongs (alphabetically), and stick him in, resulting in:
Appleby
Jones
Robertson
This way I won't have to ask the query to sort stuff every time I reference this table; this will save lots and lots of overhead and help keep my clients happy with quick(er) response.
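What comes closest to this (a minimal sketch; the table name dbo.Clients and the sample values are placeholders, not from the post) is a clustered index on Name, which keeps the rows physically stored in Name order as they are inserted. Queries that must return a guaranteed order should still use ORDER BY, but it becomes cheap because the data is already in that order:

-- assumes no other clustered index already exists on the table
CREATE CLUSTERED INDEX IX_Clients_Name
    ON dbo.Clients (Name)

-- new rows are slotted into their place in the clustered index automatically
INSERT INTO dbo.Clients (Name, Address, City, State, Zip)
VALUES ('Jones', '1 Main St', 'Anytown', 'NY', '10001')   -- hypothetical values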
I have four tables that need to be loaded into an ASP.NET application. They need to be loaded together into one result set and sorted. Is it possible to load four tables together and sort them using an SQL statement?
I am aware of and am using the SELECT ... ORDER BY feature of MSSQL in my present ASP.NET application to retrieve from single database tables. I'm using merged datasets and a sort method to solve the above problem at the moment.
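If the four tables expose compatible columns, one approach (a minimal sketch; TableA through TableD and the Id/Name columns are placeholders, since the post names none of them) is UNION ALL with a single ORDER BY applied to the combined result:

SELECT Id, Name, 'A' AS Source FROM TableA
UNION ALL
SELECT Id, Name, 'B' FROM TableB
UNION ALL
SELECT Id, Name, 'C' FROM TableC
UNION ALL
SELECT Id, Name, 'D' FROM TableD
ORDER BY Name   -- sorts the rows from all four tables together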
I have the following table in a database of SQL Server 2005 Express.
ColA  ColB
----  ----
A     12
A     10
B     50
B     13
What I want to achieve is the following result :
Col A  Aggr
-----  ------
A      2,12
B      13, 50
I tried the following query :
SELECT ColA, dbo.Concatenate(ColB) AS Aggr
FROM (
    SELECT TOP(100) PERCENT ColA, ColB
    FROM TABLE_A
    ORDER BY ColB
) AS Derived_A
GROUP BY ColA
where Concatenate is the CLR user-defined aggregate function written in C#.
What I want to do is to first sort the rows on ColB and then to execute aggregate function over the sorted rows so as to get the above-mentioned results. However, what I got is the following
Col A  Aggr
-----  ------
A      2,12
B      50,13   -- not sorted
I learnt from the SQL Server documentation that even though the ORDER BY clause is present in the subquery, the query result is not guaranteed to be sorted; only an ORDER BY clause used in the outer query is guaranteed to work. So how should this problem be solved? One option I'm considering is to do the sorting inside the aggregate function, but I'm worried that this would hurt performance. Could anyone help me solve the problem?
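One alternative that sidesteps the subquery-ordering issue entirely (a minimal sketch against the same TABLE_A; it uses FOR XML PATH string concatenation instead of the CLR aggregate, so it is a different technique from the one in the post) is to build each group's string with its own ORDER BY:

SELECT ColA,
       STUFF((SELECT ',' + CAST(ColB AS VARCHAR(20))
              FROM TABLE_A AS i
              WHERE i.ColA = o.ColA
              ORDER BY i.ColB                 -- ordering is applied per group
              FOR XML PATH('')), 1, 1, '') AS Aggr
FROM TABLE_A AS o
GROUP BY ColA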
In the database I am querying, there is a field called "group". I can find out when "group" changes with a time field that was logged whenever group changes. I am trying to find what the value was changed into.
Code:
SELECT Assignment_Log.DBID,
       Assignment_Log.Assigned_Group,
       Assignment_Log.Submit_Date
FROM Assignment_Log
This will tell me:
The unique ID of the database
The group it was assigned FROM
The time the assignment took place
I need to find the assignment TO
so what I need to do for the last column is something along the lines of...
Find the next record after the current record by looking at the next Submit_Date and view the Assigned_Group.
If no "next record" can be found, leave the cell empty.
Is there a way to join tables that have multiple matches to each other (2 records in one table and 2 in another) so that you get 2 records returned instead of 4 with only 1 JOIN ON qualifier?
In our warehouse DB, there is a master location table, an inventory location table, and physical table for counting all product in the warehouse. The master location table has one record per location, but there could be multiple items in that location so my outer join from the master location to the inventory table returns something like:
select M.MASTER_LOC, C.AS ORIG_ITEM, C.ORIG_LOT, C.ORIG_QTY
from M_LOC M
LEFT OUTER JOIN C_INVT C ON M.MASTER_LOC = C.INVT_LOC
order by M.LOC_CODE
My join is returning 4 rows for location 01-03-A which I understand, but I'm wondering if I can sort within the join or make some temp tables so that instead of:
select M.MASTER_LOC, C.AS ORIG_ITEM, C.ORIG_LOT, C.ORIG_QTY, E.AS CNTD_ITEM, E.CNTD_LOT, E.CNTD_QTY
from M_LOC M
LEFT OUTER JOIN C_INVT C ON M.MASTER_LOC = C.INVT_LOC
LEFT OUTER JOIN E_PHYS_INVT E ON M.MASTER_LOC = E.LOC_CODE
order by M.LOC_CODE
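If the intent is to pair the nth inventory row with the nth physical-count row for each location, rather than cross-matching them, one approach on SQL Server 2005 or later (a minimal sketch; the post gives no version, and pairing purely by row position is an assumption) is to number the rows per location in each table and join on that number too:

SELECT m.MASTER_LOC, c.ORIG_LOT, c.ORIG_QTY, e.CNTD_LOT, e.CNTD_QTY
FROM M_LOC AS m
LEFT OUTER JOIN
    (SELECT *, ROW_NUMBER() OVER (PARTITION BY INVT_LOC ORDER BY ORIG_LOT) AS rn
     FROM C_INVT) AS c
    ON m.MASTER_LOC = c.INVT_LOC
LEFT OUTER JOIN
    (SELECT *, ROW_NUMBER() OVER (PARTITION BY LOC_CODE ORDER BY CNTD_LOT) AS rn
     FROM E_PHYS_INVT) AS e
    ON m.MASTER_LOC = e.LOC_CODE
   AND e.rn = c.rn                 -- pair first with first, second with second
ORDER BY m.LOC_CODE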
Hi,
Been at this for 2 days now. Each business has several packages which they can sort using sort_order. I'm trying to get one package for each business (that I can do), however I want it to be the one with the lowest sort_order value. As you can see below, the first record has sort_order = 5 when it should be 1. Most of the sort_order columns will be zero by default. Any help so I can get on with my life!
Cheers
Gary

Current select:

SELECT *
FROM dbo.testAccommodation_Packages T1
WHERE (NOT EXISTS
    (SELECT *
     FROM testAccommodation_Packages
     WHERE business_id = T1.business_id AND Package_ID < T1.Package_ID))

Results:

Package_ID  business_id  item_name             sort_order
1           2            3rd Night FREE ...    5
11          3            Donegal Town ...      0
20          4            Executive ...         0

To recreate:

CREATE TABLE [testAccommodation_Packages] (
    [Package_ID] [int] IDENTITY (1, 1) NOT NULL,
    [business_id] [int] NULL,
    [Item_Name] [nvarchar] (300) NOT NULL,
    [sort_order] [int] NULL CONSTRAINT [DF_Accommodation_Packages_sort_order] DEFAULT (0)
)

INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('1', '2', '3rd Night FREE when you stay 2 nights MIDWEEK (129 Euro PPS)', '5')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('2', '2', 'Selected Donegal Town Hotel Weekend Sale - 2 B&B and 1 Dinner Only € 129 PPS', '4')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('3', '2', '2 Night Specials - Jan, Feb & Mar 2 B&B and 1 Dinner 149 Euro PPS', '3')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('4', '2', 'Easter Hotel Breaks in Donegal Town - 2 B&B + 1 D €169pps', '2')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('5', '2', '2005 Bluestack Hillwalking, 2 nights B&B, 1 Dinner, 5 course Lunch 159 Euros PPS (~109 Stg)', '1')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('6', '2', 'April Pamper Package - 2 Night Special ONLY €195pps', '10')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('7', '2', 'Discount Hotel Prices for 8th & 9th April Only € 119 PPS', '7')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('8', '2', 'Golden Year Breaks in Donegal - 4B&B + 2 Dinner €229pps', '8')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('9', '2', 'Hotel Summer Breaks Sale in Donegal - 2B&B + 1 Dinner €169pps', '9')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('10', '2', 'STAY SUNDAY NIGHTS FOR €25PPS', '6')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('11', '3', 'Donegal Town Midweek Special 99 Euro PPS 3 Nights B&B', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('12', '3', 'Bridge Weekend 2 nights B&B 79 Euro PPS (approx 55 Stg) Double Room', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('13', '3', 'Donegal Spring Weekend Specials 2 B&B 1 Dinner 109.00 euros pps', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('14', '3', 'Valentines Weekend 2 nights B&B and 1 four course gourmet dinner 99 Euro PPS', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('19', '3', 'Golden Years Break. 40% OFF 4 nights B&B €129.00 p.p.s.', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('20', '4', 'Executive Celebration Offer 1 night B&B + Dinner €139 PPS', '0')
INSERT testAccommodation_Packages (Package_ID, business_id, Item_Name, sort_order) VALUES ('21', '4', 'Watercolour Painting Break 3 B&B Full Board and Tuition € 335 PPS', '0')
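A possible fix (a minimal sketch against the table above) is to correlate on sort_order rather than Package_ID, with Package_ID only as a tie-breaker for the rows that are all zero by default:

SELECT *
FROM dbo.testAccommodation_Packages T1
WHERE NOT EXISTS
    (SELECT *
     FROM testAccommodation_Packages T2
     WHERE T2.business_id = T1.business_id
       AND (T2.sort_order < T1.sort_order
            OR (T2.sort_order = T1.sort_order AND T2.Package_ID < T1.Package_ID)))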
I have a matrix report that shows certain columns and rows. I need a row counter to show the number of rows in that matrix, but it doesn't count properly using the expression =Int(RowNumber("matrix1") - 1). I don't actually have any idea what this expression should be to fix the row count. To illustrate what I get in the report:
As you can see above, the row-count numbers are not lining up at all: it starts at 1, then jumps 2 every count, then all of a sudden jumps 5 per line, a pattern that is interesting but difficult to understand. The unique value in this matrix is the serial number, but even though I group by the serial number, it still gives me these odd row-count values.
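One expression that often numbers matrix rows more predictably (a sketch; the field name SerialNumber is an assumption based on the description above) is a running distinct count over the matrix scope instead of RowNumber:

=RunningValue(Fields!SerialNumber.Value, CountDistinct, "matrix1")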
So if an item is a bottle, it will have a unique BottleKey and Null for CaseKey and if an item is a Case, it will have a unique CaseKey and Null for BottleKey. However, the PrimaryIDs are the same for both the bottle and case. I get the results to look like this:
I'm doing a data conversion with one of my fields (SUMDWK) from one of the tables that will be used in a merge join. With the new, converted field, I do a lookup. From this lookup, I want to take a new field, FiscalWeekOfYear, and replace the original field, SUMDWK. This is necessary because SUMDWK is one of the sorted fields. In the lookup, it is not possible to change the Output Alias. Does anybody know a way around this? Thanks.
I ran a preview of a matrix-based report whose column headers are dates. The dates seem to be displaying in a somewhat (though not completely) random order from left to right. How can I ensure that they display chronologically from left to right?
@RemoteQuery consists of a SELECT with a four-table join; all of the tables are on the same linked server.
The Linked server has been set up on MyLocalServer using the "Microsoft OLE DB for SQL Server" provider. In the "Provider Options" for the linked server properties I checked "Non transacted updates" and "dynamic parameters". In the "Server Options" tab I have checked "RPC", "RPC Out", "Data Access".
The EXECUTE part of the query runs great (and returns the data very fast) by itself. But with the INSERT part, the query fails and returns the error:
"Server: Msg 7391, Level 16, State 1, Line 17 The operation could not be performed because the OLE DB provider 'SQLOLEDB' was unable to begin a distributed transaction. [OLE/DB provider returned message: New transaction cannot enlist in the specified transaction coordinator. ] OLE DB error trace [OLE/DB Provider 'SQLOLEDB' ITransactionJoin::JoinTransaction returned 0x8004d00a]."
The two servers are separated by firewalls, so I believe the reason the query is failing is that I haven't followed the procedures for setting up the ports etc. described in one of the Microsoft support articles, e.g. 250367.
Configuring the ports involves too much company politics, and besides, for what this query does, it does not need the benefits of a distributed transaction.
How can I execute my query without SQL Server automatically trying to upgrade it to a distributed transaction?
More Info: I can execute the query as a straight INSERT/SELECT linked-server query and it does the INSERT on the local SQL Server just like I want it to, so I assume it is not trying to use distributed transactions; but it takes around 7 seconds to run even though the entire SELECT is executed on the linked server, whereas executing with sp_executesql takes only 1 second.
I thought selecting "non-transacted updates" in the provider would solve this problem, but it did not.
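One workaround that often avoids the promotion to a distributed transaction (a minimal sketch; the linked server name LINKEDSRV, the local table dbo.LocalResults, and the remote query text are all placeholders, since the post does not show the actual statement) is to run the remote SELECT through OPENQUERY, so the local INSERT only wraps a plain remote read:

INSERT INTO dbo.LocalResults (Col1, Col2)
SELECT Col1, Col2
FROM OPENQUERY(LINKEDSRV,
    'SELECT t1.Col1, t2.Col2
     FROM dbo.Table1 AS t1
     JOIN dbo.Table2 AS t2 ON t1.Id = t2.Id')   -- the four-table join goes here, executed entirely on the remote server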
I'm looking for a way to get the name of the server on which the DTS package lives.
I copy packages between servers. The problem is that every time a package is copied to a different server, I have to change the reference in the connection strings to point to the new server name. I'd like to find an automatic way to interrogate the server name where the package currently lives and dynamically change connection strings from within an ActiveX task. That would cut maintenance way down.
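One building block that may help (a sketch; whether it is reachable from the ActiveX task in the way needed is a separate question) is that a connection can ask which server it is currently running on, so a package step could query this and push the result into the connection strings:

-- returns the name of the SQL Server instance executing the statement
SELECT @@SERVERNAME AS CurrentServerName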
I installed SQL SP2 and at the same time took Carbon Copy off. Now when the machine comes up, neither SQL Server nor the Agent will come up. If I go to Service Manager I can start both just fine. There is nothing in either log to show why. Any place to look, or any thoughts?