I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined as DECIMAL(13,2), and a large percentage of the values (over 50% in most cases, some as much as 99%) are 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark the columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table. I see two approaches:
1.) Three steps for each column, in each table, in each database (a T-SQL sketch of these steps follows below):
Step 1: Alter the table so the column allows NULLs
Step 2: UPDATE table SET column = NULL WHERE column = 0.00
Step 3: Alter the column to mark it as SPARSE
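For a single column, a minimal T-SQL sketch of those three steps might look like the following (table and column names are placeholders, and on a 200+ GB table the UPDATE would likely need to be batched):

-- Hypothetical names; repeat per column, per table, per database.
-- Step 1: allow NULLs
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 DECIMAL(13,2) NULL;

-- Step 2: convert 0.00 to NULL (consider batching on very large tables)
UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;

-- Step 3: mark the now-nullable column as sparse
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;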
2.) Rebuild the table (see the sketch after these steps):
Step 1: Create an entirely new table with the sparse column definitions
Step 2: Copy the entire table, transforming 0.00 to NULL for the affected columns via SSIS
Step 3: Drop the original table and rename the new table to the original name
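If the copy/transform ends up in T-SQL instead of SSIS, NULLIF handles the 0.00-to-NULL conversion in one pass. A sketch with hypothetical names (the new table is assumed to already exist with the sparse column definitions):

INSERT INTO dbo.BigTable_New (KeyCol, Amount01, Amount02)
SELECT KeyCol,
       NULLIF(Amount01, 0.00),   -- NULLIF returns NULL when the value equals 0.00
       NULLIF(Amount02, 0.00)
FROM dbo.BigTable;

DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';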
The only way I know to add a new column to an existing mapping is to go to the Advanced Editor and refresh. However, this keeps only the default mapping (where the field names match); the rest is wiped out, so the mapping has to be restored manually after that. Risky and annoying at the same time. Is there any alternative?
I have 3 tables: a supplier table, a types table, and a relationship table between the two. I want to build a query that puts the different types in columns and uses a Boolean value to identify whether the supplier supplies that type.
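One way to get that shape is conditional aggregation, with one CASE per type. A sketch under assumed table and column names (Supplier, SupplierType, SupplierTypeLink, and the three type names are all hypothetical); if the set of types is not known in advance, a dynamic PIVOT would be needed instead:

SELECT s.SupplierID,
       s.SupplierName,
       MAX(CASE WHEN t.TypeName = 'Hardware' THEN 1 ELSE 0 END) AS Hardware,
       MAX(CASE WHEN t.TypeName = 'Software' THEN 1 ELSE 0 END) AS Software,
       MAX(CASE WHEN t.TypeName = 'Services' THEN 1 ELSE 0 END) AS Services
FROM Supplier AS s
LEFT JOIN SupplierTypeLink AS l ON l.SupplierID = s.SupplierID
LEFT JOIN SupplierType AS t ON t.TypeID = l.TypeID
GROUP BY s.SupplierID, s.SupplierName;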
CREATE TABLE Person
(
    PersonID INT,
    Name varchar(50),
    HireDate datetime,
    HireOrder int,
    AltOrder int
)
Assume I have data like this
INSERT INTO Person VALUES(1, 'Rob', '06/02/1988', 0, 0)
INSERT INTO Person VALUES(2, 'Tom', '05/07/2016', 0, 0)
INSERT INTO Person VALUES(3, 'Phil', '01/04/2011', 1, 0)
INSERT INTO Person VALUES(4, 'Cris', '01/04/2011', 2, 0)
INSERT INTO Person VALUES(5, 'Jen', '01/04/2011', 3, 0)
INSERT INTO Person VALUES(6, 'Bill', '01/05/2011', 0, 0)
INSERT INTO Person VALUES(7, 'Ray', '01/23/2012', 0, 0)
I'm trying to simplify my requirement... providing the input of HireDate, HireOrder, and AltOrder, I need to be able to pick up the next person
For example, if I provide the input HireDate: 06/02/1988, HireOrder: 0, AltOrder: 0, the expected return value is "Tom" because he is the next person after the provided input.
For example, if I provide the input HireDate: 05/07/2016, HireOrder: 0, AltOrder: 0, the expected return value is "Phil" because he is the next person after the provided input. Though Phil and Cris have the same date, their HireOrder takes precedence in this case. If they also had the same HireOrder, AltOrder would come into the picture to determine the next person.
Another example: if I provide the input HireDate: 01/04/2011, HireOrder: 1, AltOrder: 0, the expected return value is "Cris" because she is the next person after the provided input. Here HireOrder determines the order.
If I provide, HireDate: 01/23/2012, HireOrder:0, AltOrder:0, as there is no person after this, I should be able to pick the first person on the list - in this case Rob.
I can write some business logic in front-end, but I thought it would be good, if I can move this to a stored procedure which can return me the PersonID for optimal performance.
I have tried writing various conditions but couldn't achieve a query that meets all my requirements here.
I'm even fine if my last condition is not met (returning the first person in the list, in case no one is available after the provided input).
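A sketch of one way to do this, assuming the "list" order is the PersonID order (which matches the sample data and the examples above) and that the three input values identify exactly one person:

-- Input values identifying the current person.
DECLARE @HireDate datetime = '20160507',
        @HireOrder int = 0,
        @AltOrder int = 0;

DECLARE @CurrentID int =
(
    SELECT PersonID
    FROM Person
    WHERE HireDate = @HireDate AND HireOrder = @HireOrder AND AltOrder = @AltOrder
);

SELECT TOP (1) PersonID, Name
FROM Person
ORDER BY CASE WHEN PersonID > @CurrentID THEN 0 ELSE 1 END,  -- people after the current one first
         PersonID;                                           -- otherwise wrap around to the first person

With the sample data, input (05/07/2016, 0, 0) returns Phil, and input (01/23/2012, 0, 0) wraps around to Rob, which also covers the last requirement.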
I am trying to create a calendar-style report that will have 12 months (as columns) and store opening listings in rows. I have created a matrix, but the problem is that the store openings display in the right period, yet they are not in any order. I would like to have the openings always on top, right under the header in the matrix. Right now they are scattered randomly all over the matrix. I have tried numerous ways of sorting and none of them work.
I am attaching a sample of what I would like to accomplish (months are columns).
Can we create a partition on an existing table? E.g. CREATE TABLE t (col1 number(10,0), col2 varchar(10)); After the table creation, can we alter the table to partition it?
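If this is SQL Server, yes: an existing table is usually partitioned by creating a partition function and scheme and then creating (or rebuilding) the clustered index on that scheme. A sketch, assuming col1 is the partitioning key and the boundary values are made up:

CREATE PARTITION FUNCTION pf_t (numeric(10,0))
AS RANGE RIGHT FOR VALUES (1000, 2000, 3000);

CREATE PARTITION SCHEME ps_t
AS PARTITION pf_t ALL TO ([PRIMARY]);

-- Move the existing table onto the scheme by building the clustered index on it
-- (add WITH (DROP_EXISTING = ON) if a clustered index already exists).
CREATE CLUSTERED INDEX cix_t ON t (col1)
ON ps_t (col1);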
Hi, I have been trying to download the latest update to SQL Server BOL for quite some time without any success. I have tried downloading around 3-4 times. Every time (the size of the msi file would be around 25 KB) I try to extract the msi file, I get an error message to the effect that 'it is not a valid windows installer package'. Has anyone come across this problem? I have been trying to download from this URL: http://www.microsoft.com/downloads/...A6F79CB1-A420-445F-8A4B-BD77A7DA194B&displaylang=en
Thanks,
Harish
I have a scenario where I need to add a blank column to a table that is published for replication. This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update, it broke replication because the update would take forever; jobs are continuously updating the table, so replication can't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
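For reference, adding a nullable column with no default is a quick, metadata-only change that does not rewrite the existing rows, and with transactional replication schema changes flow to subscribers by default (the publication's replicate-DDL setting). A sketch with hypothetical names:

-- Adding a NULLable column with no default does not touch the 100M existing rows.
ALTER TABLE dbo.BigPublishedTable
    ADD NewColumn varchar(50) NULL;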
We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.
After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support this.
The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.
Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.
For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
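For reference, the switching requirement is that every index is aligned, i.e. built on the same partition scheme (or an equivalent one) as the table; the partitioning column does not have to be an index key, since SQL Server adds it as an included column automatically if it is missing, but adding it as a trailing key as described above is harmless. A sketch with hypothetical names:

-- Assumes the table is partitioned on ps_snapshot(snapshotdate).
CREATE NONCLUSTERED INDEX ix_FactTable_SomeCol
    ON dbo.FactTable (SomeCol, snapshotdate)   -- trailing snapshotdate key is optional
    ON ps_snapshot (snapshotdate);             -- this is what actually makes the index aligned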
Let's say I have a function named MyCustomFunction and I want to ensure that it exists in the database. Let's say I have a create script for the function. I want the script to be runnable multiple times if needed. A common way to do this is to check for an object_id at the top of the function like this:
IF OBJECT_ID('MyCustomFunction') IS NOT NULL
    DROP FUNCTION MyCustomFunction
GO
CREATE FUNCTION MyCustomFunction...
But is there a more elegant way to do this? For example, instead of dropping and recreating the function, is there a way to simply exit from the script and do nothing if the function already exists? Something like this:
IF OBJECT_ID('MyCustomFunction') IS NOT NULL
    RETURN
GO
CREATE FUNCTION MyCustomFunction...
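On SQL Server 2016 SP1 and later, CREATE OR ALTER avoids the existence check entirely and preserves permissions on the function; a minimal sketch:

-- Creates the function if it is missing, otherwise alters it in place.
CREATE OR ALTER FUNCTION dbo.MyCustomFunction()
RETURNS int
AS
BEGIN
    RETURN 1;   -- placeholder body
END
GO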
We have a SQL 2008 Instance existing on active/passive cluster with 2 nodes running on Windows server 2008 R2 Ent. Edn.
Now we need to install another SQL instance on this cluster. What are the prerequisites apart from a new IP address? Do we need new shared disks, or can the existing disks be utilized?
I'm modifying a report that uses a date parameter as a report filter. The original report had no restrictions on what dates may be entered, so it displayed a neat Calendar picker tool for use in selecting the date for the parameter.
Thing is, this new version needs to have the dates limited to only those available in the source data. So, when I provide a query to describe the available values in the parameter properties window, instead of the nifty Calendar picker, I get a textbox dropdown list. [insert sad sound here]
I was hoping that it would still provide the Calendar picker, but with the available dates highlighted in bold or some color, or the unavailable dates greyed out, something along those lines; not an unimaginative dropdown list. To define the available values, I use a very simple query:
SELECT DISTINCT Load_Date FROM Census ORDER BY Load_Date
Is there a way to get it to display a Calendar tool rather than the dropdown list, if the parameter is given a list of available dates?
How can I find calls to stored procedures and functions that do not exist? We have many stored procedures, and sometimes a stored procedure or function that is called does not exist. Is there a query/script or something with which I can identify which stored procedures do not 'work' and which procedure/function they are calling? I am searching for stored procedures and functions that are still called but do not exist in the current database.
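One approach, assuming the missing objects would live in the same database: sys.sql_expression_dependencies records every referenced name, and a NULL referenced_id means the reference could not be resolved to an existing object. A sketch:

SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id)        AS referencing_object,
       d.referenced_schema_name,
       d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id IS NULL              -- name could not be resolved to an existing object
  AND d.referenced_database_name IS NULL   -- ignore cross-database references
  AND d.referenced_server_name IS NULL;    -- ignore cross-server references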
I need to recover some data in a table, but I'm not 100% sure of the right way to do this safely.
I'll need to query the two tables to compare the before and after, but how do I go about restoring/attaching the backup database to SQL Server without causing conflicts?
If I restore, I assume this would just overwrite the live database, which is obviously the worst thing that can happen. If I attach the backup, how does this affect the current live DB? How do I make sure that it's not getting accessed and mistaken for the live DB?
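The usual safe route is to restore the backup as a new database, with a different name and different file paths, so the live database is never touched; a sketch with hypothetical names and paths (the logical file names for WITH MOVE can be read from RESTORE FILELISTONLY):

RESTORE DATABASE MyDb_Restored
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO N'D:\Data\MyDb_Restored.mdf',
     MOVE 'MyDb_log' TO N'D:\Data\MyDb_Restored_log.ldf',
     RECOVERY;

-- Optionally keep it read-only so nothing mistakes it for the live copy.
ALTER DATABASE MyDb_Restored SET READ_ONLY;

-- Then compare, e.g.:
-- SELECT * FROM MyDb_Restored.dbo.SomeTable
-- EXCEPT
-- SELECT * FROM MyDb.dbo.SomeTable;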
If I try to install SP2 using SQL Server authentication (sa), it fails. The following lines appear in the file "Hotfix.log":
01/03/2007 14:08:23.859 Authenticating user using SAPWD
01/03/2007 14:08:23.875 Validating database connections using Windows Authentication
01/03/2007 14:08:24.171 Pre-script database connection failed - continuing to wait for SQL service to become responsive to connection requests
... repeated 60 times ...
01/03/2007 14:13:33.625 The following exception occurred: SQL Server is not responding before script execution ("SQL Server reagiert nicht vor der Skriptausführung") Date: 01/03/2007 14:13:33.609 File: depotsqlvaultstablesetupmainl1setupsqlsesqlsedllinstance.cpp Line: 1411
Why does it try to use Windows Authentication although I have told it to use SQL Server Authentication (Windows Authentication has been disabled in this database instance)?
Recently I have come across a requirement where I need to design a table.
There are some columns in the table like the ones below, with the DECIMAL datatype:
BldgLength
BldgHeight
BldgWeight
Based on my knowledge, I know that the values before the decimal point will not be more than 4 digits.
Now as per MSDN,
Precision 1 - 9 => storage 5 bytes
so I can create the column as:
BldgLength DECIMAL(6,2) DEFAULT 0
OR
BldgLength DECIMAL(9,2) DEFAULT 0
Now while reading some articles, I came to know that when we do some kind of operation like SUM or AVG on the above column, the result might be larger than the current data type can hold.
So some folks suggested that I should keep some extra space/digits for those math functions, to avoid an arithmetic overflow error.
So my question is: what should the data type for the above columns be?
The query I'm running so far is wrong, but here it is...
SELECT t.FromUserID, t.ToUserID, t.msg,
       u.UserName AS UserFrom, u.GroupID AS FromGroup,
       u2.UserName AS UserTo, u2.GroupID AS ToGroup
FROM tmp_Messages t
LEFT JOIN (SELECT UserID, GroupID, UserName FROM tmp_users WHERE GroupID = 3) u
[Code] .....
I'm missing the details of one of the users. I know what the problem is, I just can't figure out how to get this working without using temp tables, which I can't do in the production version.
How do I use the LIKE operator in a SELECT statement to retrieve multiple column names in a SQL Server DB? For example, I have a table, say employees, where I want to get all column names like emp_, acc_, etc. using '%'. And what is the query below used for?
SELECT column_name AS 'Column Name',
       data_type AS 'Data Type',
       character_maximum_length AS 'Max Length'
FROM information_schema.columns
WHERE table_name = 'tblUsers'
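That query lists the column names, data types, and maximum lengths of tblUsers from the INFORMATION_SCHEMA.COLUMNS catalog view. The same view answers the first part of the question; a sketch that pulls only the emp_/acc_ columns from a hypothetical employees table (the underscore is wrapped in [] because in LIKE patterns '_' matches any single character):

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'employees'
  AND (column_name LIKE 'emp[_]%' OR column_name LIKE 'acc[_]%');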
I'm able to successfully import data in a tab-delimited .txt file using the following statement.
BULK INSERT ImportProjectDates
FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
However, in order to import the text file, I had to add columns to the text file to match the columns that exist in the table. The original file is an export out of another database and contains all but 5 columns from my db.
How would I control which column BULK INSERT actually imports when working with a .txt file? I've tried using a FORMAT FILE, however I kept getting errors which I tracked down to being a case of not using it with a .txt file.
Yes, I could have the DBA add in the missing columns to the query from the other DB to create the columns, however I'd like to know a little bit more about this overall.
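One alternative to a format file, since BULK INSERT can also target a view: create a view over the table that exposes only the columns present in the text file, in file order, and bulk insert into the view; the omitted columns must be nullable or have defaults. A sketch with hypothetical column names:

CREATE VIEW dbo.vImportProjectDates_Load
AS
SELECT ProjectID, StartDate, EndDate    -- only the columns that exist in the file
FROM dbo.ImportProjectDates;
GO

BULK INSERT dbo.vImportProjectDates_Load
FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');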
I have multiple databases on the server, and all my databases have the tables stdVersions and stdChangeLog. The stdVersions table has a field called DatabaseVersion which stores the version of the database. The stdChangeLog table has a field called ChangedOn which stores the date of any change made in the database.
I need to write a query/stored procedure/function that will return all the database names, versions, and the dates changed on. The results should list each database name alongside its version and the date changed on.
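A sketch of one way to do this with dynamic SQL, assuming every database really does contain both tables (the string-concatenation loop over sys.databases is a common pattern, though informal):

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ' + QUOTENAME(name, '''') + N' AS DatabaseName, v.DatabaseVersion, c.LastChangedOn'
    + N' FROM ' + QUOTENAME(name) + N'.dbo.stdVersions AS v'
    + N' CROSS APPLY (SELECT MAX(ChangedOn) AS LastChangedOn FROM '
    + QUOTENAME(name) + N'.dbo.stdChangeLog) AS c'
FROM sys.databases
WHERE database_id > 4;          -- skip master, tempdb, model, msdb

EXEC sys.sp_executesql @sql;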
I have a table of Customers & their data in about 20 Columns.
I have another table that has potential Customers with 3 Columns.
I want to append the records from Table 2 onto Table 1 to the Columns with the same names.
I've thought of using UNION ALL or INSERT ... SELECT, but I'm mainly stuck on the most efficient way to do this.
There is also no related field that can be used to join the data as these Customers in table 2 have no Customer ID yet as they're only potential Customers.
Can I just append the 3 columns from Table 2 to the same 3 columns in table 1?
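Appending the matching columns is just an INSERT ... SELECT that names the three shared columns; the other 17 or so columns in Table 1 end up NULL (or their defaults). A sketch with hypothetical table and column names:

INSERT INTO dbo.Customers (CustomerName, Phone, Email)
SELECT CustomerName, Phone, Email
FROM dbo.PotentialCustomers;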
I'm using MS SQL Server 2008 and I'm trying to figure out if it is possible to identify what tables / columns contain specific records.
In the example below, the information is generated for the end user, so the column headers (Customer ID, Customer, Address, Phone, Email, Account Balance, Currency) are not necessarily the field names from the relevant tables; they are simply more identifiable headers for the user.
Customer ID | Customer   | Address            | Phone       | Email                 | Account Balance | Currency
js0001      | John Smith | 123 Nowhere Street | 555-123-456 | jsmith@nowhere.com    | -100            | USD
jd2345      | Jane Doe   | 61a Down the road  | 087-963258  | jdoe@downthe road.com | -2108           | GBP
mx9999      | Mr X       | Whoknowsville      | 147-852369  | mrx@whoknows.com      | 0               | EUR
In reality the column headers may be called eg (CustID, CustName, CustAdr, CustPh, CustMail, CustACBal, Currency).
As I am not the generator of this report, I would like to know whether it is possible to identify the field names and/or the tables they exist in, if I were to use the report info to search for it. For example, could I find out the field name and table for "jd2345" or for "mrx@whoknows.com", because the Customer ID or Email may not be what the actual fields are called.
I'm not a DB admin and I don't have rights to do a stored procedure on the server. I'm guessing what I want is not so simple to do, but is it possible to do via a query?
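It is possible with plain queries: table and column names live in the INFORMATION_SCHEMA views, and a dynamic-SQL search can check every string column for a literal value such as 'jd2345'. A rough sketch (read-only, needs nothing beyond SELECT permission, and slow on large databases):

DECLARE @SearchValue nvarchar(100) = N'jd2345';
DECLARE @sql nvarchar(max) = N'';

-- Build one query per string column, each reporting the table/column if the value is found.
SELECT @sql = @sql
    + N' UNION ALL SELECT ''' + c.TABLE_SCHEMA + '.' + c.TABLE_NAME + N''' AS TableName, '''
    + c.COLUMN_NAME + N''' AS ColumnName FROM '
    + QUOTENAME(c.TABLE_SCHEMA) + N'.' + QUOTENAME(c.TABLE_NAME)
    + N' WHERE ' + QUOTENAME(c.COLUMN_NAME) + N' = @val HAVING COUNT(*) > 0'
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES AS t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar');

SET @sql = STUFF(@sql, 1, LEN(N' UNION ALL'), N'');   -- drop the leading UNION ALL
EXEC sys.sp_executesql @sql, N'@val nvarchar(100)', @val = @SearchValue;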
I would like to pull data from two separate columns based on the value of MakeFlag. So if MakeFlag = 0 I would like the description to show, but for anything else I would like the catalog description to show up.
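A CASE expression in the SELECT list handles this; a sketch assuming hypothetical column names Description and CatalogDescription on a Product table:

SELECT p.ProductID,
       CASE WHEN p.MakeFlag = 0
            THEN p.[Description]         -- plain description when MakeFlag = 0
            ELSE p.CatalogDescription    -- catalog description for anything else
       END AS DisplayDescription
FROM dbo.Product AS p;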
I have a matrix report with STORES in the row group and DATES in the column group. The table sums on SALES. The DATES column is formatted like =format(Fields!DATES.Value, "MMM yyyy"). The table also has 2 parameters @Start and @End. This all works great but I then added a child report so that the user can click on the SALES value for any sale by month and store. The child report uses the @Start and @End parameters from the original report but this is where I run into problems.
Rather than bringing me the sales details for a particular store and month, it brings back everything from the time period selected with the original date parameters. So, say I originally selected 2015-01-01 to 2015-06-30 with the parameters: when I select FEB 15 in my matrix report, I get Feb's data along with all the other months, i.e. Jan-Jun 15. The DATES fields in both reports are in the same date format; in fact both reports use exactly the same dataset.
I realize it's something to do with the formatting of the DATE field not being recognized in the linked report.