I'm required to allow users to order the items in a field so they are displayed on a page in the order the user specified.
For example: a user can drag and drop items in a list to specify the order they will be displayed in.
I have my drag and drop code ready to do this.
I have an idea on how to do this but I think it’s too inefficient.
I was going to create an orderby field and populate it with a number that corresponds to the position of the item. However, as one can deduce, if a user drags and drops a record between two others, I would have to change not only its orderby number but also the orderby numbers of the other items.
For instance, if I dropped the item with an orderby number of 3 between 6 and 7, I would have to change its 3 to a 6 and then shift the orderby numbers of all the records that used to sit between the old and new positions.
Well, I hope I make sense. It’s easier to visualize it on paper.
Does anyone know how to tackle this issue of user dynamic ordering?
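One way to avoid renumbering row by row is to do the whole shift in a single UPDATE. Below is a minimal sketch, assuming a hypothetical items(item_id, orderby) table and variables for the old and new positions; an alternative trick is to leave gaps in the numbering (10, 20, 30, ...) or use a decimal orderby, so a drop between two rows only ever touches one record.

DECLARE @item_id INT, @old_pos INT, @new_pos INT
SELECT @item_id = 42, @old_pos = 3, @new_pos = 6   -- example values

-- Give the dragged row its new position and shift only the rows
-- between the old and new positions, all in one statement.
UPDATE items
SET orderby = CASE
                  WHEN item_id = @item_id THEN @new_pos
                  WHEN @new_pos > @old_pos THEN orderby - 1   -- moved down: close the gap above
                  ELSE orderby + 1                            -- moved up: open a gap below
              END
WHERE orderby BETWEEN CASE WHEN @new_pos > @old_pos THEN @old_pos ELSE @new_pos END
                  AND CASE WHEN @new_pos > @old_pos THEN @new_pos ELSE @old_pos END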
declare @table table (ParentID INT, ChildID INT, Value float)
INSERT INTO @table SELECT 1,1,1.2
[code]....
In this case, the (1,1), (2,2) and (3,3) records are the parents, while the (NULL,1) records are children whose parent is 1; similarly, the (NULL,2) records are children whose parent is 2, and so on.
Now my requirement is to display the parent records sorted ascending by Value, with each parent's child records displayed directly after the corresponding parent.
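A sketch of one way to get that ordering, assuming (as the sample rows suggest) that a parent row has ParentID = ChildID while a child row has a NULL ParentID and its parent's key in ChildID: join every row to its family's parent row and sort by the parent's Value first.

SELECT t.ParentID, t.ChildID, t.Value
FROM @table t
INNER JOIN @table p
        ON p.ChildID = t.ChildID
       AND p.ParentID IS NOT NULL                             -- p is the parent row of t's family
ORDER BY p.Value,                                             -- parents ascending by Value
         t.ChildID,                                           -- keep each family together
         CASE WHEN t.ParentID IS NOT NULL THEN 0 ELSE 1 END,  -- the parent row comes first
         t.Value                                              -- then its children, ascending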
I've got a table with a field called 'morder' which orders a menu based on the values in this field (1 for position 1, 2 for position 2, etc.). For example:

1 - menu item 1
2 - menu item 2
3 - menu item 3
4 - menu item 4

If I want to add a record to this table and put it at number 2 in the list, I need to update the table to then read:

1 - menu item 1
2 - NEW menu item 2
3 - menu item 2
4 - menu item 3
5 - menu item 4

I want to use an MS SQL or PHP function to re-order this field... is it possible?
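It is; you can make room with one UPDATE and then insert. A minimal sketch, assuming a hypothetical menu(name, morder) table:

UPDATE menu SET morder = morder + 1 WHERE morder >= 2   -- shift position 2 and below down by one
INSERT INTO menu (name, morder) VALUES ('NEW menu item', 2)

The same two statements can be run verbatim from PHP; no special function is needed.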
I insert records in a particular order. After all the records are inserted, when I try to retrieve them they all come back sorted in ascending order. Is there any way to prevent them from being sorted?
SELECT Loan.loan_No AS Loan_No,
       Loan.customer_No AS Customer_No,
       Customer.first_name AS First_name,
       Customer.second_name AS Second_name,
       Customer.surname AS Surname,
       Customer.initials AS Initials,
       Bank.Bank_name AS Bank_name,
       Branch.Branch_name AS Branch_name,
       Branch.branch_code AS Branch_code,
       Bank_detail.bank_acc_type AS Bank_acc_type,
       Transaction_Record.transaction_Amount AS Transaction_Amount,
       Transaction_Record.transaction_Date AS Transaction_Date,
       Loan.product AS Product,
       Product.product_Type AS Product_Type,
       Product_Type.loan_Type AS Loan_Type
FROM Transaction_Record
     INNER JOIN Loan ON Transaction_Record.loan_No = Loan.loan_No
     INNER JOIN Product ON Loan.product = Product.product
     INNER JOIN Customer ON Loan.customer_No = Customer.customer_no
     INNER JOIN Bank_detail ON Customer.customer_no = Bank_detail.customer_no
     INNER JOIN Branch ON Bank_detail.Branch = Branch.Branch
     INNER JOIN Bank ON Branch.Bank = Bank.Bank
     INNER JOIN Product_Type ON Product.product_Type = Product_Type.product_Type
Is it possible to use wildcards with an equals statement? Such as:

SELECT * FROM Table WHERE City = '%' AND State = 'Ca'

Basically just stating where City equals anything. I know you can do it with a LIKE statement, such as:

SELECT * FROM Table WHERE City LIKE '%' AND State = 'Ca'

but is that very efficient? The reason I want to do this is that I want to set the city programmatically, so just omitting it won't work. Also, using City LIKE '%' seems not to include NULL... is there any way to include NULL as well as anything else? Thanks for your help!
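A common alternative is an optional-parameter filter, where passing NULL means "any city, including rows where City is NULL". A sketch, assuming a @City parameter:

SELECT *
FROM [Table]
WHERE State = 'Ca'
  AND (@City IS NULL OR City = @City)   -- a NULL parameter disables the filter entirely

Note that LIKE '%' cannot match NULLs because any comparison against NULL evaluates to unknown, which is why those rows disappear from the LIKE version.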
SELECT CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
            WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
            ELSE 'Greater than 1000'
       END AS Population_Range,
       COUNT(CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
                  WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
                  ELSE 'Greater than 1000'
             END) AS [No. Of Countries]
FROM Country
GROUP BY CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
              WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
              ELSE 'Greater than 1000'
         END
My main table has the following structure:

t1 (id_primary, id_secundary, name), e.g. [(1, 1, "name1"), (2, 1, "name2")]

I want to join this table with the following second table:

t2 (id_primary, id_secundary, value), e.g. [(1, NULL, "value1"), (NULL, 1, "value2")]

The join should first try to find a match on id_primary, and only if that fails should it find a match on id_secundary. Every row in t1 is matched against a single row in t2. The following query works:

select
    a.name, isnull(b.value, c.value)
from
    t1 a
    left outer join t2 b on a.id_primary = b.id_primary
    left outer join t2 c on a.id_secundary = c.id_secundary

I'm wondering though if it would be possible to write a query that only uses t2 once, since it actually is quite a complex query that is calculated twice now. Any ideas (besides using a temp table)?
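One way to reference t2 only once is to fold both lookups into a single correlated subquery that prefers the primary match; whether it is actually cheaper depends on your indexes. A sketch (OUTER APPLY requires SQL Server 2005 or later):

SELECT a.name, b.value
FROM t1 a
OUTER APPLY (
    SELECT TOP 1 t2.value
    FROM t2
    WHERE t2.id_primary = a.id_primary
       OR t2.id_secundary = a.id_secundary
    ORDER BY CASE WHEN t2.id_primary = a.id_primary THEN 0 ELSE 1 END   -- primary match wins
) b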
I have an ASP.NET web application that uses a Treeview control to display what can potentially be a very large data set. In the past, I would just run a recursive stored procedure in my database that would output the XML, which I would save to a file. The Treeview used the XML file as its data source. I did this because it can take so long for the stored procedure to run (10 seconds or more) that it isn't practical to have the treeview point directly to the stored procedure. This worked well enough because the data didn't change very often.
Now, it looks as if the application will be used in a production environment, and I really need to find a way to supply up-to-date data to the treeview in a dynamic way. I have tried creating a view that would provide XML and that would be updated any time the target table is updated, but that has not worked. I have also tried creating a trigger that would output to an XML file any time an edit was made (using the xp_cmdshell functionality), but that has proven difficult as well.
Is there a simpler solution that I am just missing? I just want an up-to-date XML representation of the data that is a result of a recursive function.
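If you are on SQL Server 2005 or later, one option worth sketching is to generate the XML on demand with a recursive CTE and FOR XML rather than shelling out to a file. The table and column names below are hypothetical, and the output is a flat list with parent pointers rather than nested elements, so the Treeview binding may need adjusting:

WITH tree AS (
    SELECT NodeID, ParentID, Name
    FROM dbo.Nodes
    WHERE ParentID IS NULL                         -- anchor: the root rows
    UNION ALL
    SELECT n.NodeID, n.ParentID, n.Name
    FROM dbo.Nodes n
    INNER JOIN tree t ON n.ParentID = t.NodeID     -- walk down one level per recursion
)
SELECT NodeID, ParentID, Name
FROM tree
FOR XML AUTO, ROOT('nodes')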
We have a website that accesses our SQL databases. In the past, we used our internal employees to improve our SQL databases; however, we now want to outsource the work.
There is a lot of information we would like to keep private from the outsourcing company.
Is there a way to efficiently generate test data throughout our database without changing our original database? And is there an easy way to keep that test database updated as we make new changes?
I found this product through Google: EMS Data Generator for SQL Server (http://www.sqlmanager.net/en/products/mssql/datagenerator). Would this program help us make test data?
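One common low-tech approach, sketched here with hypothetical database, file, table, and column names, is to restore a copy of production under a different name and then scramble the private columns in the copy, so the original is never touched:

RESTORE DATABASE Prod_Test FROM DISK = N'D:\Backups\Prod.bak'
    WITH MOVE 'Prod_Data' TO N'D:\Data\Prod_Test.mdf',
         MOVE 'Prod_Log'  TO N'D:\Data\Prod_Test.ldf'

UPDATE Prod_Test.dbo.Customer                       -- repeat for each sensitive table
SET first_name = 'First' + CAST(customer_id AS varchar(10)),
    surname    = 'Last'  + CAST(customer_id AS varchar(10))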
Hi, I'd be interested in people's thoughts about the following. A user on my site will be searching for a venue name, and that could officially include a sponsor which the user might not search for. I am using the AutoCompleteDropdown from the AJAX Control Toolkit, so the user will start typing in a few characters and the results will be returned. I can generate the results from SQL by doing a simple

LIKE '%' + @searchTerm + '%'

however, this fills me with great fear of table scans. At the moment we'd be querying against a table of 5K records, but our application is very new. I'm thinking one option is to split the words into another table, a one-to-many relationship holding each word of the venue name. The benefit of this would be that you could do a

LIKE @term + '%'

but then I have the cost of the join (and the added complexity, which is not a major issue). Any thoughts/tips? Thanks!
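For what it's worth, the word-split idea looks roughly like this (venue and venue_word are hypothetical names); a leading-prefix LIKE is sargable, so an index on venue_word(word) can be seeked instead of scanning:

SELECT DISTINCT v.venue_id, v.venue_name
FROM venue_word w
INNER JOIN venue v ON v.venue_id = w.venue_id
WHERE w.word LIKE @term + '%'          -- prefix search can use the index on (word)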
We have a SQL Server 2012 Enterprise live transactional database that is now growing over 1G per month and is becoming a size problem for us. It is currently at 23G. Character type fields are all Unicode and I have calculated a savings of 5G in space converting only 2 such fields averaging 206 characters each to non-Unicode, and almost 10G in space if we convert a few more of them from nchar and nvarchar to char and varchar types. These fields will never have a requirement to hold Unicode characters that cannot be in the SQL_Latin1_General_CP1_CI_AS collation as they come in as plain ASCII originally and will always do so per the protocol standard.
I'm the software architect and chief C# developer, though only a DBA hack, or I would not have designed our database with Unicode fields for high-volume tables that did not need them when the database was created 3 years ago. I want to correct this mistake now, before we finalize converting to an AlwaysOn environment to help with various performance and backup issues.
After downsizing these two or more fields, we would like to shrink the database one time to take advantage of the space savings for full backups, and for seeding an AlwaysOn environment.
1. What is the safest and most efficient conversion technique for downsizing columns from nchar/nvarchar to char/varchar types, especially when there are multiple fields in the same table to be converted? (See the sketch after question 3 below.)

I tested doing an "add new column, set new = old, drop old, rename new to old" for both of the main two fields I want to convert from nvarchar(max) to varchar(max), and it took 81 minutes on our test server (4 virtual cores, 8G memory) before running out of disk space, even though there was 8G left on the disk and the db has unlimited size set (Could not allocate space for object 'dbo.abc'.'PK_xyz' in database 'xxx' because the 'PRIMARY' filegroup is full). I did delete an old database before it finished after getting a disk warning, so maybe it did not count that new space. Regardless, it was too slow. And this was on just the two largest of these columns (12.6M rows), and it only ran at 2 to 3% CPU, so it seemed not very efficient and indicated unacceptable downtime if we were to convert even these two fields, much less any additional ones. Average field size for these two fields was only 206 characters, or 412 bytes each. Another technique I plan to try is to create the new table definition in a new schema, SELECT INTO it from the old table, then move the tables between schemas and delete the old table. I have a FK and indexes to contend with on the table.
2. If I figure out how to do #1 efficiently within an acceptable maintenance window, what is the safest practice for doing a one-time shrink and ending up with organized/rebuilt indexes and updated statistics? I understand the logic of not doing regular shrinks and that sometimes a shrink can actually increase the size.
3. Is there any third-party tool that could take a backup and restore it into a new database with the modified field definitions, or otherwise convert certain field types?
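Regarding question 1: the usual way to keep log and disk pressure down is the same add/copy/drop/rename approach, but with the copy done in small batches so each transaction stays tiny. A sketch with hypothetical names (dbo.abc with a Notes column); afterwards you would rebuild indexes and reapply anything that referenced the old column:

ALTER TABLE dbo.abc ADD Notes_new varchar(max) NULL

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.abc                     -- small batches keep each log transaction small
    SET Notes_new = CAST(Notes AS varchar(max))
    WHERE Notes_new IS NULL AND Notes IS NOT NULL

    IF @@ROWCOUNT = 0 BREAK
END

ALTER TABLE dbo.abc DROP COLUMN Notes
EXEC sp_rename 'dbo.abc.Notes_new', 'Notes', 'COLUMN'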
Situation: SQL Server 2000. At my new employer, they have a production database on one server and a copy of it, set to read-only, on another server that is used for reporting.
#1 They have a SQL Server Agent job on the production server that (twice a day):
- Backs up the production database
- Copies the backup file to a directory on the reporting server (it's pretty big and can take time if there are problems with the LAN)
#2 They have a SQL Server Agent job on the reporting server (scheduled to run two hours or so after the job on server 1; they figured it was a safe bet that the backup and copy process of the first job would be done by then) that:
- Breaks the user connections to the reporting database
- Performs a restore on the reporting database using the backup file that was copied to the holding directory by the production job
- Sets some permissions for various users
- Sets the reporting database to READ ONLY

What I would like to do is find a more efficient way to create this reporting database. I have started researching DTS methods, but would like some opinions from more experienced users.
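For reference, the restore step of such a job usually boils down to a few statements; a sketch with hypothetical database and file names. On SQL Server 2000 it may also be worth comparing this against log shipping, which applies transaction log backups instead of a full restore each time:

ALTER DATABASE Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE   -- drop current user connections

RESTORE DATABASE Reporting
FROM DISK = N'D:\Holding\Production.bak'
WITH REPLACE

ALTER DATABASE Reporting SET READ_ONLY
ALTER DATABASE Reporting SET MULTI_USER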
We are experiencing a problem using the ORDER BY clause in the statement below:
SELECT person.pkey, first_name
FROM person, private_person
WHERE fkey_person = person.pkey
  AND first_name = 'A'
ORDER BY first_name, person.pkey
where the person table contains the following columns:
pkey, name, etc.
and private_person contains:
pkey, fkey_person (indexed), first_name (indexed)
The output is:
STEP 1
The type of query is SELECT
FROM TABLE private_person
  Nested iteration
  Index : i$private_person$first_name
FROM TABLE person
  Nested iteration
  Using Clustered Index

pkey        first_name
----------- -----------
2000512     A
10994       A
2299        A
1097        A
1218        A
5133        A
1329        A
1387        A
1465        A
7532        A
5513        A
1884        A
512         A
591         A
(14 row(s) affected)
STEP 1
The type of query is SETOFF
Why does SQL Server not order by pkey? Is there any information about this somewhere? Is it a bug, or what?
If we order by fkey_person instead, everything is OK. Can anybody help?
I need help writing the query for the following: I need to collapse the continuity. If the termdate of a record is one day less than the effdate of the next record for the same ID, I need to collapse the records. See the example below. How should I write the query that gives the desired output, i.e., gets MIN(effdate) and MAX(termdate) whenever a termdate is one day less than the effdate of the next record?
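This is the classic gaps-and-islands problem. A sketch of one approach, assuming a hypothetical coverage(ID, effdate, termdate) table and SQL Server 2012 or later for LAG: flag each row that starts a new stretch, turn a running total of the flags into a group number, then aggregate each group.

WITH flagged AS (
    SELECT ID, effdate, termdate,
           CASE WHEN DATEDIFF(day,
                             LAG(termdate) OVER (PARTITION BY ID ORDER BY effdate),
                             effdate) = 1
                THEN 0 ELSE 1 END AS starts_island        -- 0 = continues the previous row
    FROM coverage
),
grouped AS (
    SELECT ID, effdate, termdate,
           SUM(starts_island) OVER (PARTITION BY ID ORDER BY effdate
                                    ROWS UNBOUNDED PRECEDING) AS island
    FROM flagged
)
SELECT ID, MIN(effdate) AS effdate, MAX(termdate) AS termdate
FROM grouped
GROUP BY ID, island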
I have made a SQL table called costreallocation consisting of two columns:

RuleID varchar(10)
Type varchar(20)

I run the following three queries:

insert into costreallocation(RuleId,Type) values('aa','a')
insert into costreallocation(RuleId,Type) values('dd','d')
insert into costreallocation(RuleId,Type) values('cc','a')

Then I notice that these three records are not stored in the order they were run. The table contains:

aa a
cc d
dd d

but I need the table filled in the order the inserts ran:

aa a
dd d
cc d
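Rows in a table have no guaranteed order, so the only way to get them back in insertion order is to store that order and ORDER BY it. A sketch adding an identity column to the existing table:

ALTER TABLE costreallocation ADD SeqNo int IDENTITY(1,1)   -- numbers existing rows and future inserts

SELECT RuleID, Type
FROM costreallocation
ORDER BY SeqNo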
I have an int field that will hold a number for an ordering system; it's a databound field, bound to a textbox in a grid. I want the default to be empty; however, not all fields are required, so I may have 1-6 as bound values, and the rest should fall in line behind them. The only thing I can think of is to default the values to 9999 so they come after 1, 2, 3, 4, 5, 6, etc., but I don't like seeing 9999 in my bound fields. Thanks for any suggestions you may have. Jeff
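One alternative to the 9999 placeholder is to leave the column NULL and push NULLs to the end in the ORDER BY, so nothing artificial ever shows in the grid. A sketch with hypothetical names:

SELECT item_name, sort_order
FROM grid_items
ORDER BY CASE WHEN sort_order IS NULL THEN 1 ELSE 0 END,   -- NULLs sort after real values
         sort_order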
As you can see, it's a many-to-many relationship. Is there a way to order the books by Authors.Lastname (the first author's, if there are several) with one query? If not, what is the fastest way? I don't need the names in the result set, so I only want to order the books by Authors.Lastname, not include it.
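You can sort by a correlated subquery without selecting it. A sketch, assuming hypothetical Books, Authors, and BookAuthors junction tables, and taking "first" to mean the alphabetically first last name (if you store an author position, order the subquery by that instead):

SELECT b.*
FROM Books b
ORDER BY (SELECT MIN(a.Lastname)                            -- first author's last name
          FROM BookAuthors ba
          INNER JOIN Authors a ON a.AuthorID = ba.AuthorID
          WHERE ba.BookID = b.BookID)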
Hiya! I've got a little stored procedure:

CREATE PROCEDURE rICDCode @param1 char(15) = 'Description'
AS
SELECT DiagCode, ICD9ID, Description
FROM ICDCodes
ORDER BY @param1

and SQL is returning error 1008: "The SELECT item identified by the ORDER BY number %d contains a variable as part of the expression identifying a column position. Variables are only allowed when ordering by an expression referencing a column name." I want to be able to use the same stored procedure for several different functions which all need the same rowset sorted differently. Any way to do this?
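One way that avoids dynamic SQL is a set of CASE expressions in the ORDER BY, one per sortable column so that differing data types are never forced into a single expression; a sketch against the same procedure:

CREATE PROCEDURE rICDCode @param1 char(15) = 'Description'
AS
SELECT DiagCode, ICD9ID, Description
FROM ICDCodes
ORDER BY CASE WHEN @param1 = 'DiagCode'    THEN DiagCode    END,
         CASE WHEN @param1 = 'ICD9ID'      THEN ICD9ID      END,
         CASE WHEN @param1 = 'Description' THEN Description END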
I want to create a top 20 product list from a few thousand products, and I want the rest of the products to be grouped into 'Others'...
I also want the products to be ordered by the facts in the cube. Thus the product dimension would dynamically change depending on the Time dimension that's being selected.
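In the cube itself this would be a calculated member over a TopCount set, but the relational shape of the report is easy to sketch in SQL (fact_sales and its columns are hypothetical):

WITH ranked AS (
    SELECT product_name,
           SUM(sales_amount) AS total,
           ROW_NUMBER() OVER (ORDER BY SUM(sales_amount) DESC) AS rn
    FROM fact_sales
    GROUP BY product_name
)
SELECT CASE WHEN rn <= 20 THEN product_name ELSE 'Others' END AS product_group,
       SUM(total) AS total
FROM ranked
GROUP BY CASE WHEN rn <= 20 THEN product_name ELSE 'Others' END
ORDER BY MIN(rn)   -- keeps the top 20 in rank order and puts 'Others' last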
Is it possible, without using CASE statements, to order a result set by one field if a second field's value is null, or by that second field if it is not null?
This would be on datetime fields. Let's say I have 5 records with 2 date fields sched_dt and arriv_dt.
sched_dt will always have a value, arriv_dt may or may not have a value.
In a single result set I would like to have the records with an arriv_dt sorted by arriv_dt and the ones without an arriv_dt sorted by sched_dt.
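COALESCE (or ISNULL) gives exactly this without a CASE: it yields arriv_dt when present and falls back to sched_dt otherwise. A sketch with a hypothetical table name:

SELECT sched_dt, arriv_dt
FROM trips
ORDER BY COALESCE(arriv_dt, sched_dt)

If you would rather have all the arrived records grouped before the unarrived ones instead of interleaved, add a leading sort key on whether arriv_dt is null.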
I have a table that has 4 dates for each record (birthday, anniversary, policy_start & policy_end). What I want to do is cycle through each record and then do some calculations using the gaps between the dates. So it might be Calc1 using (birthday->anniversary), Calc2 (anniversary->policy_start), Calc3 (policy_start->policy_end), etc. BUT the dates might be in different orders (i.e., the birthday could come first, or the anniversary could, etc.).
In VBA in Excel, I can use the SMALL function within my calcs and this returns the minimum date, the next smallest, the next, and then the biggest. But how can I do this in SQL? I guess I could push the 4 dates out to a temp table, order it and then suck them back in, but I have no idea how to do this for all the records in my primary table.
I've also tried using various WHERE statements (i.e., do Calc1 using Birthday & Anniversary WHERE Birthday < Anniversary and ........), but the code gets awfully long.
Is it possible to write my own SMALL function which uses a bubble sort on the 4 dates and then returns the smallest, next smallest, etc.?
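On SQL Server 2008 or later you can unpivot the four dates per row and rank them, which behaves like SMALL without a temp table or a hand-rolled sort; the table and column names below are assumed from the description:

SELECT p.record_id,
       d.dt,
       ROW_NUMBER() OVER (PARTITION BY p.record_id ORDER BY d.dt) AS date_rank   -- 1 = smallest
FROM policies p
CROSS APPLY (VALUES (p.birthday),
                    (p.anniversary),
                    (p.policy_start),
                    (p.policy_end)) AS d(dt)

The gap calculations then become DATEDIFFs between consecutive date_rank values, for example via LAG or a self-join on adjacent ranks.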
Hi, I was wondering: I have several columns of data that can be sorted on; the user just clicks on the column name. However, we have one column called order status where we don't want the order to be alphabetical (it is a text field; a couple of current possible values are Processing, In Production, and Waiting For Approval). Alphabetical order is not good for logical sorting; we want to be able to specify which value gets which order. Does that make sense? So we would want to say that Processing is first in the display, Waiting For Approval is next, then In Production, and so on. Is there any possible way of doing this in a SQL query? Or am I going to have to manually modify all the data, adding numbers to the beginning of each value so that it will sort on the numbers? Thanks everyone.
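No data changes are needed; a CASE expression in the ORDER BY can assign each status its rank (table and column names assumed):

SELECT *
FROM orders
ORDER BY CASE order_status
             WHEN 'Processing'           THEN 1
             WHEN 'Waiting For Approval' THEN 2
             WHEN 'In Production'        THEN 3
             ELSE 4                                 -- anything unexpected sorts last
         END

A tidier long-term option is a small lookup table mapping each status to a sort number, joined in whenever you sort.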
I add a new column using the ALTER command, but I always find it at the end of the table. I want to add it at a particular position, between the existing columns.
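Whether this is possible depends on the engine. MySQL supports positioning directly; SQL Server and Oracle do not, and there the usual options are to rebuild the table or simply control column order in the SELECT list or a view. The MySQL form, with hypothetical names:

ALTER TABLE my_table ADD COLUMN new_col INT AFTER existing_col;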
Hi, I'm streaming monitoring histories of components into a database table with the following three columns: Hour (DateTime), Id (Int64), and Value (Int32). The Value entry is an aggregate of all values sent from component Id during, and 59 minutes and 59 seconds after, the time listed in the Hour column. I had rather not have to sort the queries by date after pulling them from the database, so I tried to index the table by the Hour column. Any column will inevitably have duplicates, since uniqueness depends on a combination of Hour and PortId. But indexing the Hour column doesn't result in INSERTs being stored in order as I had expected; instead, every entry is listed in order of insertion. So... how can I keep such a table ordered by date on the disk? I'm afraid this will become very inefficient if this isn't nipped in the bud right now. Thanks so much for your help! -Brandon
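In SQL Server it is the clustered index, not a nonclustered one, that dictates the logical row order of the table; a sketch making the (Hour, Id) pair the clustering key (table name assumed):

CREATE CLUSTERED INDEX IX_History_Hour_Id
ON dbo.MonitorHistory (Hour, Id)   -- rows are maintained in this order; date-range scans come back sorted

Strictly speaking you still need an ORDER BY to guarantee result order, but with this clustering that sort is essentially free.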
I am wondering if someone may be able to help me. I need to order my data by month in the calendar sense, not alphabetically. Below is what I currently have, but it only sorts alphabetically.
select to_char(created,'yyyy-Mon'), matdesc, count(*)
from test
group by to_char(created,'yyyy-Mon'), matdesc
order by to_char(created,'yyyy-Mon') desc
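Since to_char suggests Oracle: the usual trick is to group and order by the truncated date itself and only format it for display, so the month never sorts as text. A sketch of the rewritten query:

select to_char(trunc(created, 'MM'), 'yyyy-Mon'), matdesc, count(*)
from test
group by trunc(created, 'MM'), matdesc          -- group by the real date, not its string form
order by trunc(created, 'MM') desc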
I am trying to do a simple SQL query. The result will be sorted alphabetically by default, but I need to specify that, say, if there is an "Others" value it should be placed last in the list. Is there any way to write the query using ORDER BY?
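Yes: lead the ORDER BY with a flag that is 1 only for 'Others', then sort alphabetically within each group (table and column names assumed):

SELECT category
FROM categories
ORDER BY CASE WHEN category = 'Others' THEN 1 ELSE 0 END,   -- 'Others' sinks to the bottom
         category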
I also have the following query (again, I've stripped out the non-relevant fields):
SELECT TOP (100) PERCENT
       SUM(dbo.RECORD.Stock_Held_Number) AS TotalStock,
       CATEGORY.Name AS FundName
FROM CATEGORY
     LEFT OUTER JOIN dbo.RECORD ON CATEGORY.ID = dbo.RECORD.CATEGORY_id
GROUP BY CATEGORY.Name, RECORD.Stock_Held_Number
At the moment it's grouping all the CATEGORIES and giving me a sum total for each, which is great.
The problem I have is the CATEGORY table: there is an optional join to parent CATEGORY records on the same table.
What I'm trying to do is provide a fully ordered result within the CATEGORIES, and I don't have a clue.
For example:
The CATEGORY table has the following values:

ID  Name     Parent_Category_id
1   Pens     NULL
2   Animals  NULL
3   Rocks    NULL
4   Horses   2
5   Dogs     2
When I currently run the query I get:

Name     TotalStock
Pens     20
Animals  30
Rocks    40
Horses   50
Dogs     60
It's technically correct, because I don't want the parent to include the total value of the children.
What I'm really trying to do, though, is order them within the hierarchy and indent the result (if possible), so I would get:
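Something close to that layout can be produced on SQL Server 2005 or later with a recursive CTE that builds a sortable path and a depth for each category; a sketch against the tables above (the REPLICATE indentation is optional):

WITH hierarchy AS (
    SELECT ID, Name,
           CAST(Name AS varchar(500)) AS SortPath,   -- path used purely for ordering
           0 AS Depth
    FROM CATEGORY
    WHERE Parent_Category_id IS NULL
    UNION ALL
    SELECT c.ID, c.Name,
           CAST(h.SortPath + '/' + c.Name AS varchar(500)),
           h.Depth + 1
    FROM CATEGORY c
    INNER JOIN hierarchy h ON c.Parent_Category_id = h.ID
)
SELECT REPLICATE('    ', h.Depth) + h.Name AS Name,   -- indent children under their parent
       SUM(dbo.RECORD.Stock_Held_Number) AS TotalStock
FROM hierarchy h
     LEFT OUTER JOIN dbo.RECORD ON h.ID = dbo.RECORD.CATEGORY_id
GROUP BY h.SortPath, h.Depth, h.Name
ORDER BY h.SortPath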