In MS Access, for numeric fields, the decimal places shown can be defined as "Auto", meaning that the database will determine the number of decimal places to show based on the content of the field (e.g. 1.0, 0.75, 1.125).
In SQL Server, for the same field, it appears that the decimal precision is hard-coded, resulting in a fixed representation (e.g. 1.000, 0.750, 1.125).
Is there a way to make the decimal representation in SQL Server more like Access where trailing zeros are truncated?
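Formatting is usually best handled in the front end, but one server-side trick (a sketch, not a general-purpose formatter) is to round-trip the value through float, whose default string conversion drops trailing zeros:

declare @d decimal(9,4)
set @d = 0.7500
-- float-to-varchar conversion omits trailing zeros: returns '0.75', and '1' for 1.0000
select cast(cast(@d as float) as varchar(20))

The caveat is that the default float-to-string conversion only carries about six significant digits, so values with many digits can come back rounded or in scientific notation; test against your real data before relying on it.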
When I run the query below, I get the error "Arithmetic overflow error converting numeric to data type numeric":

declare @a numeric(16,4)
set @a=99362600999900.0000
The value 99362600999900 is 14 digits long and the variable I declared has a precision of 16, so why does this error occur? When I raise the precision to 18, the error goes away.
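The reason is that numeric(p, s) reserves s of its p digits for the fractional part, so numeric(16,4) can hold at most 16 - 4 = 12 digits to the left of the decimal point, while 99362600999900 needs 14. A quick sketch of the fix:

-- 18 - 4 = 14 integer digits, enough for this value
declare @a numeric(18,4)
set @a = 99362600999900.0000
select @a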
I'm getting the above error when trying to populate a variable. The values in question are:

@N = 21
@SumXY = -1303765191530058.2251000000
@SumXSumY = -5338556963168643.7875000000

When I run SELECT (@N * @SumXY) - (@SumXSumY * @SumXSumY) in QA I get the result OK, which is -28500190448996439680147097583285.072256, i.e. 32 digits to the left of the decimal and 6 to the right. When I try the following, i.e. populating a variable with that value, I get the error:

SELECT @R2Top = (@N * @SumXY) - (@SumXSumY * @SumXSumY)

@R2Top is NUMERIC(38, 10).
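The same precision/scale arithmetic applies here: NUMERIC(38, 10) leaves only 38 - 10 = 28 digits for the integer part, but the computed value needs 32, so the assignment overflows even though the bare SELECT (whose result type carries a scale of 6) succeeds. A sketch of one way around it, assuming the variables are declared as above:

-- 38 - 6 = 32 integer digits, which is exactly what this result requires
DECLARE @R2Top NUMERIC(38, 6)
SELECT @R2Top = (@N * @SumXY) - (@SumXSumY * @SumXSumY)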
My company is working on a bond derivative portfolio analysis tool and we're facing a problem that I did not see adequately addressed anywhere in the literature. I really did RTFM. I'm very experienced in relational modelling (10+ years), so this is not a case of not understanding the principles. Here is the problem stripped of irrelevant context. The problem below is simplified for the sake of the example, so don't sweat the details.

THE PROBLEM

1. There are many types of bonds; each type has a different set of attributes, different attribute names, and different attribute data types. For example, bond A has two variables: a yearly interest rate and a date of issue. Bond B has five variables: an interest rate and 4 specific dates on which various portions of the principal need to be paid. Bond C has a set of 4 variables: the interest rate in period 1, the interest rate in period 2, the date on which the bond can be put back to the issuer, and two dates on which the bond can be called by the issuer. And so on.

So, on the first attempt I could represent each bond type as its own table. For example,

create table bond_type_a (rate INTEGER, issue_date DATE)
create table bond_type_b (rate INTEGER, principle_date1 DATE, principle_date2 DATE, principle_date3 DATE, principle_date4 DATE)
create table bond_type_c (rate1 INTEGER, rate2 INTEGER, put_date DATE, call_date DATE)

This is the nice relational approach but it does not work because:

2. There are many thousands of bond types, so we would have to have many thousands of tables, which is bad.

3. The client needs to be able to construct the bond types on the fly through the UI and add them to the system. Obviously, it would be bad if each new type of bond created in the UI resulted in a new table.

4. When a user loads the bond portfolio it needs to be very fast. In the table-per-type approach, if a user has 100 different types of bond in the portfolio you would have to do 100 joins. This is a heavily multi-user environment, so it's a non-starter. It's impossibly slow.

THE SOLUTIONS

So now that we ditched the table-per-bond-type approach, we can consider the following solutions (unpleasant from the relational point of view):

1. Name-value pairs.

create table bonds (bond_id INTEGER, bond_type INTEGER, attribute_id INTEGER, value VARCHAR(255))

Comment: The client does not like this approach because they want to run various kinds of reports and thus they do not want the values to be stored as VARCHAR. They want the DB to enforce the data type.

2. Typed name-value pairs.

create table bonds (bond_id INTEGER, bond_type INTEGER, attribute_id INTEGER, int_val INTEGER, string_val VARCHAR(255), date_val DATE)

Comment: The client does not like this because the table is sparse. Every row has two empty fields.

3. Link table with a table per data type.

create table bonds (bond_id INTEGER)
create table bond_int_data (bond_id INTEGER REFERENCES bonds(bond_id), value INTEGER)
create table bond_string_data (bond_id INTEGER REFERENCES bonds(bond_id), value VARCHAR(255))
create table bond_date_data (bond_id INTEGER REFERENCES bonds(bond_id), value DATE)

Comment: This meets most of the requirements but it just looks ugly.

4. Dynamic mapping.

create table bonds (bond_id INTEGER, int_val1 INTEGER, int_val2 INTEGER, date_val1 DATE, date_val2 DATE, string_val1 VARCHAR(255), string_val2 VARCHAR(255))

Then you have to add some dynamic mapping in your code which will provide the bond-specific mapping (say, stored in an XML file). For example:

For bond_A: yearly_rate maps to int_val1, issue_date maps to date_val1
For bond_C: rate1 maps to int_val1, rate2 maps to int_val2, put_date maps to date_val1, call_date maps to date_val2

Comment: This is very good for performance because when I load a portfolio of different bond types I can pull them all in with one SELECT statement (a sketch of that load follows after the questions). However, this approach has the problem that the table is sparse. The number of fields of each type has to be high enough to accommodate the most complex bond, while simple bonds will only be using two or three.

THE QUESTIONS

Are the four approaches I described above exhaustive? Are there any others that I overlooked?
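For what it's worth, here is roughly what the single-statement portfolio load under approach 4 could look like (a sketch only; portfolio_bonds is a hypothetical link table between portfolios and bond_ids, and the meaning of each generic column comes from the external mapping):

select b.*
from bonds b
join portfolio_bonds pb on pb.bond_id = b.bond_id
where pb.portfolio_id = @portfolio_id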
Hi, I want to get the string representation of a hex number from a varbinary column of a table. For example, I want to get the output 'The Hex value is 0xFF', but

select 'The Hex value is ' + convert(varchar(10), 0xFF)

returns the ASCII character for 0xFF. Any idea how I can get the hex form as it is? Thanks.
Supratim
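Two common options (a sketch; the CONVERT style argument needs SQL Server 2008 or later, while fn_varbintohexstr is an older, undocumented system function):

declare @vb varbinary(10)
set @vb = 0xFF
-- style 1 produces the hex string including the 0x prefix (SQL Server 2008+)
select 'The Hex value is ' + convert(varchar(30), @vb, 1)
-- undocumented but long-standing alternative on earlier versions
select 'The Hex value is ' + master.dbo.fn_varbintohexstr(@vb)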
I am trying to take a hexadecimal representation of a binary number and convert to the true binary representation so that I can compare it against a binary field in one of my tables.
After reading the documentation it seems I should be able to do this with the CAST or CONVERT function. However it does not appear to be working correctly.
Can you tell me why this T-SQL code produces the wrong binary value:
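A minimal sketch of the usual pitfall, assuming the hex value starts out as a character string: CAST copies the characters' byte values instead of interpreting the hex digits, while CONVERT with style 2 (or style 1 when the string carries a 0x prefix) parses them on SQL Server 2008 and later:

declare @hex varchar(10)
set @hex = 'C0A80101'
-- wrong: stores the ASCII codes of 'C','0','A','8', i.e. 0x43304138
select cast(@hex as varbinary(4))
-- right on SQL Server 2008+: style 2 interprets the string as hex digits, giving 0xC0A80101
select convert(varbinary(4), @hex, 2)
-- style 1 does the same when the string includes the 0x prefix
select convert(varbinary(4), '0x' + @hex, 1)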
Here is my sample table creation and insertion script. I want to represent the data as a summary result using just one record. How can I do that? Please see the desired result set described below.
create table policy ( id int not null, name nvarchar(50) not null, constraint pk_policy primary key(id) );
create table localhost ( policy_id int not null, id int not null, ip_begin binary(4) not null, ip_end binary(4) not null, prefix tinyint not null default 0, constraint pk_localhost primary key(policy_id,id) );
create table remotehost ( policy_id int not null, id int not null, ip_begin binary(4) not null, ip_end binary(4) not null, prefix tinyint not null default 0, constraint pk_remotehost primary key(policy_id,id) );
create table rate ( policy_id int not null, inbound int not null, outbound int not null, constraint pk_rate primary key(policy_id) );
------------- insert into policy values(0,N'policy0');
insert into localhost values(0,0,0xC0A80101,0xC0A80101,24); insert into localhost values(0,1,0xC0A80A01,0xC0A80B01,0);
insert into remotehost values(0,0,0xACA80101,0xACA80101,0); insert into remotehost values(0,1,0xACA80A01,0xACA80B01,0); insert into remotehost values(0,2,0xACA80C01,0xACA80C01,24);
insert into rate values(0,1000,2000);
------------- select * from policy; select * from localhost; select * from remotehost; select * from rate;
-- result of policy table
-- 0 policy0

-- result of localhost table
-- 0 0 0xC0A80101 0xC0A80101 24
-- 0 1 0xC0A80A01 0xC0A80B01 0

-- result of remotehost table
-- 0 0 0xACA80101 0xACA80101 0
-- 0 2 0xACA80C01 0xACA80C01 24

-- result of rate table
-- 0 1000 2000
------------------------------------------------------
desired result set:

id  name     localhost                                      remotehost                        rate
--  -------  ---------------------------------------------  --------------------------------  -----------
0   policy0  (192.168.1.1/24),(192.168.10.1~192.168.11.1)   (172.168.1.1),(172.168.12.1/24)   0,1000,2000
description of the desired result set:
0. the key value is the id column and is referenced by each policy_id column.
1. the id and name columns are the same as in the policy table.
2. the localhost and remotehost columns are an integration of ip_begin, ip_end, and prefix.
3. ip_begin and ip_end should be converted from the numeric format to dotted presentation.
4. if prefix is greater than 0, it should be displayed using '/'.
5. if ip_begin and ip_end are equal, show just one ip.
6. if the two ips differ, they are separated by '~'.
7. the rate fields are packed together, separated by ','.
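One possible shape for the query (a sketch only, assuming SQL Server 2005 or later for FOR XML PATH concatenation; dbo.fn_BinToDottedIp is a hypothetical helper, and the remotehost column would follow the same pattern as localhost):

create function dbo.fn_BinToDottedIp (@ip binary(4))
returns varchar(15)
as
begin
    -- each byte of the binary(4) value becomes one octet of the dotted form
    return cast(cast(substring(@ip,1,1) as int) as varchar(3)) + '.'
         + cast(cast(substring(@ip,2,1) as int) as varchar(3)) + '.'
         + cast(cast(substring(@ip,3,1) as int) as varchar(3)) + '.'
         + cast(cast(substring(@ip,4,1) as int) as varchar(3));
end;
go

select p.id, p.name,
       stuff((select ',(' + dbo.fn_BinToDottedIp(l.ip_begin)
                    + case when l.ip_end <> l.ip_begin
                           then '~' + dbo.fn_BinToDottedIp(l.ip_end) else '' end
                    + case when l.prefix > 0
                           then '/' + cast(l.prefix as varchar(3)) else '' end
                    + ')'
              from localhost l
              where l.policy_id = p.id
              order by l.id
              for xml path('')), 1, 1, '') as localhost,
       (select cast(r.policy_id as varchar(10)) + ','
             + cast(r.inbound as varchar(10)) + ','
             + cast(r.outbound as varchar(10))
        from rate r
        where r.policy_id = p.id) as rate
from policy p;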
Our customer (of our ecommerce system) wants to be able to preserve deleted entities in the database so that they can do reporting, auditing etc. The system is quite complex: each end user can belong to multiple institutional affiliations (which can purchase on behalf of the user). The end user also has a rich trail of past transactions, affiliations etc. Thus in the schema each user entity is related to many others, which in turn relate to yet others, and so on.

In the past, when a user was deleted all of his complex relationships were also deleted in a cascading fashion. But now the customer wants us to add a "deleted" flag to each user so that a user is never _really_ deleted; instead his "deleted" flag is set to true. The system subsequently behaves as if the user did not exist, but the customer can still run reports on deleted users.

I pointed out that it is not as simple as that, because the user entity is related to many, many others, so we would have to add this "deleted" flag to every relationship and every other entity and thus have "deleted" past purchases, "deleted" affiliations - a whole shadow schema full of such ghost entities. This would over time degrade performance, since now each query in the system has to add a clause: "where deleted = 0".

I assume this is a standard problem, since many organizations must have this need of preserving deleted records (for legal or other reasons). I tried to talk them into creating a simple audit file where all the deletions would be recorded in XML, but they were not too happy with that.

Is there a more satisfying solution to this than the "deleted" flag?

Thanks for your help,
- robert
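One pattern that avoids sprinkling the flag everywhere (a sketch; the table, column and view names are purely illustrative): keep the "deleted" flag only on the user table, leave the child rows untouched, and expose the live data through views so that application queries never have to repeat the where deleted = 0 predicate:

-- a "delete" becomes an update of the flag on the user row only
update users set deleted = 1 where user_id = @user_id
go

-- the application reads from views that hide soft-deleted users
create view active_users as
    select * from users where deleted = 0
go
create view active_purchases as
    select pu.*
    from purchases pu
    join users u on u.user_id = pu.user_id
    where u.deleted = 0
go

Reporting and auditing can then go straight to the base tables, while day-to-day queries see only active data.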
I'm using a DTS package to import a large CSV file. There is a particular column that contains text or numbers, and I want to delete the row if that column has a number. I've used IsNumeric in the SELECT portion of the statement, but I can't figure out how to use it as part of my WHERE clause.
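ISNUMERIC can go straight into the WHERE clause once the file has been landed in a table; a minimal sketch, assuming the CSV is loaded into a staging table first (names are made up):

-- remove the rows whose mixed column holds a numeric value
delete from staging_import
where isnumeric(mixed_column) = 1

-- or select only the non-numeric rows
select *
from staging_import
where isnumeric(mixed_column) = 0

Note that ISNUMERIC is loose about what counts as a number (currency symbols, 'e' notation and the like), so spot-check the rows it flags.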
I have a lot of decimals in the flat files. So far the largest numbers map to a numeric(18,6).
My question is, is DT_NUMERIC the correct datatype for this data? In which case, what size do I need to set it? Right now it's 18. Couldn't find much info on this.
I am trying to insert some values into a table where the column is of the data type "numeric". The insert works fine. Update does not work:

Update BUT_BREAKDOWN_PCT SET BDP_EFFORT_BREAKDOWN_PCT=0.15 WHERE BDP_BREAKDOWN_ID =1 AND BDP_PHASE_ID = 3 AND BDP_START_EFF_DT = '12/31/2004'

BDP_EFFORT_BREAKDOWN_PCT is a numeric column with a size 5 (4,3). When I do the update directly from QA, it works fine. I was googling it and read a KB article saying it's a problem with a Service Pack of SQL Server 2000. If it were, then the query should not work even from QA, should it? Has anyone had this problem before? Please help.
Hello, I need to be able to select only the numeric data from a string that is in the form of iFuturePriceID=N'4194582'. I have the following code working to remove all the non-numeric text from before the numbers, but it is still leaving the single quote after the numbers, i.e. 4194582'. Any ideas or suggestions on how to accomplish that? Thanks in advance.

Declare @TestData varchar(29)
Set @TestData = "iFuturePriceID=N'4194582'"
Select Substring(@TestData, patindex('%[0-9]%', @TestData), Len(@TestData))

TGru
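One way to trim the trailing quote as well (a sketch building on the snippet above): find where the digits start, then measure the run of digits with a second PATINDEX and take only that many characters:

Declare @TestData varchar(29)
Set @TestData = 'iFuturePriceID=N''4194582'''
Declare @start int, @len int
Set @start = patindex('%[0-9]%', @TestData)
-- length of the digit run = position of the first non-digit after @start, minus one
Set @len = patindex('%[^0-9]%', substring(@TestData, @start, len(@TestData))) - 1
If @len < 0 Set @len = len(@TestData)   -- the digits run to the end of the string
Select substring(@TestData, @start, @len)   -- returns 4194582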
I have some data which I am trying to put into a DM where I can use it as part of a cube (my first!!)
I have hit a small problem with dates: I get them from the ERP system as a numeric field, and I need to convert them to a date format. The intention is to use this converted data with Named Calculations to derive Year, Month, Day etc.
However, I cannot seem to be able to convert and store (in SQL) this column. Can anyone advise?
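A minimal sketch, assuming the ERP stores the date as a number in YYYYMMDD form (adjust if your format differs): go via a character string and style 112, which is the yyyymmdd format:

declare @erp_date int
set @erp_date = 20240315
-- numeric -> char(8) -> datetime
select convert(datetime, cast(@erp_date as char(8)), 112)

To store it, the same expression can populate a new datetime column in an UPDATE (the table and column names here are illustrative):

alter table fact_sales add order_date datetime
go
update fact_sales set order_date = convert(datetime, cast(order_date_num as char(8)), 112)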
Hi, I am using the SQL Server CE database. I have many tables with numeric fields. When I insert data through the front end, all the float data is getting rounded. I want all the data kept as float, with the decimal point. How can I do that?
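One thing worth checking (a sketch; the cause could also be in the front-end parameters): numeric defaults to a scale of 0, so a column declared as plain numeric or numeric(10) rounds fractional values away on insert. Declaring an explicit scale, or using float, keeps the fractional part:

-- numeric(10) would round the value to 12; numeric(10,3) (or float) keeps 12.345
create table measurements (reading numeric(10,3))
insert into measurements (reading) values (12.345)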
I'm getting some data from a flat file with an SSIS package. It comes in as an integer, but I would like to convert it to a decimal with a scale of 3. Example values from the flat file: 20700, 15000, 9500, 11800. In the data conversion I set a scale of 3, but what I got was: 20700.000, 15000.000, 9500.000, 11800.000. What I want is something like this: 20.700, 15.000, 9.500, 11.800. I don't know if you guys get the idea, but I would appreciate it if anyone can help me. Thanks, Erick
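The data conversion only changes the type; the implied decimal point still has to be applied by dividing by 1000, which in the package would typically go into a Derived Column expression. A sketch of the arithmetic in T-SQL terms:

declare @raw int
set @raw = 20700
-- shift the last three digits to the right of the decimal point
select cast(@raw / 1000.0 as decimal(10,3))   -- 20.700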
On the Microsoft website, I am trying to understand how the decimal and numeric data types work. It says that valid values for precision are:

-10^38 + 1 through 10^38 - 1

I don't understand the purpose of the negative sign before the first 10. I understand that decimal(p,s) refers to having a total of p digits (before and after the decimal point) and only s digits to the right of the decimal point. How does the range above relate to the 's' portion of the syntax?
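For what it's worth, that range describes the values the type can store when the maximum precision of 38 is used (valid values for p itself are 1 through 38); the leading negative sign is simply the lower bound of a signed type, since -10^38 + 1 is the same as -(10^38 - 1). The scale s does not appear in the range at all; it only controls how many of the p digits sit to the right of the decimal point. A small demonstration:

-- decimal(38,0) holds integers across the documented range; the sign makes it symmetric
declare @max decimal(38,0), @min decimal(38,0)
set @max = cast(replicate('9', 38) as decimal(38,0))   -- 38 nines, i.e. 10^38 - 1
set @min = -@max                                       -- -(10^38 - 1) = -10^38 + 1
select @min as min_value, @max as max_value

-- with a nonzero scale, s of the p digits move to the right of the point
declare @d decimal(5,2)
set @d = -999.99   -- 5 digits in total, 2 of them after the decimal point
select @d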
The situation is that I have a column whose data comes from a flat file, and all I want to do is check that the incoming column fits numeric(12,3); if the incoming data exceeds that size (12,3), an exception should be raised or the row redirected.
The problem is that I tried to do this with the Data Conversion or Derived Column component, but if the scale of the incoming data exceeds 3, the component just trims it down to a scale of 3.
I also tried to handle it with the Flat File data source component, but I ran into the problem that if the data in the column is empty, the Flat File data source reads the numeric column as zero.
I hope someone can help me because I need to handle this soon.
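One workaround, if the file can be landed as text first (a sketch; the staging table and column names are made up): keep the column as varchar in a staging table, then use string checks to find values that will not fit numeric(12,3) and route them elsewhere before converting:

-- numeric(12,3) allows at most 9 digits before the decimal point and 3 after it
select t.*
from (select s.*, charindex('.', s.amount_text) as dot_pos
      from staging_file s) t
where isnumeric(t.amount_text) = 0                                     -- empty or not a number
   or len(replace(case when t.dot_pos = 0 then t.amount_text
                       else left(t.amount_text, t.dot_pos - 1) end, '-', '')) > 9
   or (t.dot_pos > 0 and len(substring(t.amount_text, t.dot_pos + 1, 50)) > 3)

This also catches the empty-string case that the Flat File source was silently turning into zero.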
I have a package that I am building right now and I need to filter out data from my employeeid field that is not an integer. How would I proceed with this? I currently have a conditional split filtering out employee IDs that contain a dash.
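If the data passes through a staging table, a LIKE pattern can isolate the non-integer IDs (a sketch; the table and column names are illustrative). The same test, rewritten in the Conditional Split's expression language, would replace the dash-only check:

-- rows whose employeeid contains anything other than the digits 0-9
select *
from staging_employees
where employeeid like '%[^0-9]%'
   or employeeid = ''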