OK, a company has handed me an .xls file containing 1,000+ users: their emails (which are to be their user names) and their passwords, both in plain text. I want to add these users to the ASPNET_DB, with the condition that the passwords and user IDs are encrypted, as they are in that table.
I have been given a Product table whose columns are all varchar(8000). One of the columns is Price and another is DecimalPosition. The Price column holds the price without any decimal point, and the value in the DecimalPosition column determines where the decimal point should be placed. So, for instance, if the Price column contains '1000' and DecimalPosition contains '2', then the actual price for this product is '10.00' and not '1000'. Similarly, if DecimalPosition contains '3', then the actual price is '1.000' and not '1000'.

My question is: when I am getting the price for a product from this table, how can I get it in the correct format, e.g. '10.00' rather than '1000'? Should I use SQL statements to convert 1000 into 10.00, or should I use some sort of programming logic to do the conversion?

Kind regards
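If the conversion is done in SQL, one option is simply to cast and divide by the right power of ten. A minimal sketch, assuming Price and DecimalPosition always contain numeric text (the table and column names come from the question; everything else is illustrative):

SELECT Price,
       DecimalPosition,
       CONVERT(decimal(18,4),
               CONVERT(decimal(18,0), Price) / POWER(10, CONVERT(int, DecimalPosition))
              ) AS ActualPrice
FROM   Product
WHERE  ISNUMERIC(Price) = 1
  AND  ISNUMERIC(DecimalPosition) = 1;

Note this returns a fixed scale of four decimal places; showing exactly two decimals for DecimalPosition = 2 and three for DecimalPosition = 3 is a display concern that is usually easier to handle in the application layer.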
I have started to install a third-party web-based product for our clients that uses SQL 2000 as its backend.
However, every time they create a new 'topic' within the web app, it creates a new database with a single table in it! There could (and will) be thousands of these 'topics' created.
I have told the company we buy this from that this is not acceptable, and they have asked why!
Can anyone point me in the direction of (preferably) a Microsoft document that I can send to them? Just saying 'you just don't do it that way' isn't working, and I can't find anything easily myself.
If I want to commit the transaction after every thousand rows, how would I build that into the script below?
Begin Transaction
Select a.AccountNo, a.TransactionNo, a.TransactionAmount, a.TransactionDate Into dbo.test1
From Trans_May_14Aug2002 a, Reds_JuL_Trans_08Jul2002 b
Where ltrim(rtrim(left(a.AccountNo,20))) = ltrim(rtrim(left(b.AccountNo,20)))
  AND ltrim(rtrim(left(a.TransactionNo,20))) = ltrim(rtrim(left(b.TransactionNo,20)))
  AND a.TransactionAmount = b.TransactionAmount
  AND a.TransactionDate = b.TransactionDate
  AND ltrim(rtrim(left(a.Product,20))) IN ('PR060','PR061','PR091','PR096','PR111','PR121','PR122')
  AND ltrim(rtrim(left(a.TransactionType,20))) IN ('TR001','TR003','TR011','TR013','TR027','TR028','TR042','TR043','TR044','TR045','TR998','TR999')
  AND ltrim(rtrim(left(a.JournalType,20))) NOT IN ('JT000','JT720','JT721','JT722','JT723','JT725','JT726','JT729','JT730','JT737','JT738','JT739','JT740','JT743','JT746','JT751')
   OR ltrim(rtrim(left(a.JournalType,20))) IS NULL
  AND a.TransactionDate > '2002-04-30'
  AND b.TransactionDate < '2002-07-01'
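One way to commit in batches (a hedged sketch, not the exact statement above): create dbo.test1 up front, then loop, copying at most 1000 rows per iteration inside its own transaction. It assumes (AccountNo, TransactionNo) is enough to recognise rows that have already been copied; the remaining filters from the WHERE clause above would be carried over unchanged.

DECLARE @rows int
SET @rows = 1
SET ROWCOUNT 1000          -- SQL 2000 style; on 2005+ you could use INSERT TOP (1000) instead

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION

    INSERT INTO dbo.test1 (AccountNo, TransactionNo, TransactionAmount, TransactionDate)
    SELECT a.AccountNo, a.TransactionNo, a.TransactionAmount, a.TransactionDate
    FROM Trans_May_14Aug2002 a
    JOIN Reds_JuL_Trans_08Jul2002 b
      ON ltrim(rtrim(left(a.AccountNo,20))) = ltrim(rtrim(left(b.AccountNo,20)))
     AND ltrim(rtrim(left(a.TransactionNo,20))) = ltrim(rtrim(left(b.TransactionNo,20)))
    WHERE NOT EXISTS (SELECT 1 FROM dbo.test1 t
                      WHERE t.AccountNo = a.AccountNo
                        AND t.TransactionNo = a.TransactionNo)
    -- ... plus the product / transaction type / journal type / date filters from above ...

    SET @rows = @@ROWCOUNT

    COMMIT TRANSACTION
END

SET ROWCOUNT 0             -- always reset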
I was wondering whether it is possible to have a DB table with 1000 columns? The alternative, of course, is to break these columns into 1000 rows plus an ID that tells what each value relates to.
I want to know the pros and cons of having 1000 columns versus 1000 rows for one set of related data. The reason for needing 1000 columns in the first place is that there are about 1000 questions in a set whose answers need to be saved for one session (hence they should all go together).
Can anybody shed some light on it? Has anybody tried something so crazy before?
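For what it's worth, the 1000-rows route usually ends up looking something like the sketch below (names are illustrative, not from the post): one narrow row per answered question, keyed by session and question.

CREATE TABLE dbo.SessionAnswer
(
    SessionId  int           NOT NULL,
    QuestionId smallint      NOT NULL,    -- 1 .. ~1000
    Answer     nvarchar(400) NULL,
    CONSTRAINT PK_SessionAnswer PRIMARY KEY (SessionId, QuestionId)
)

-- one session's answers become up to 1000 narrow rows instead of one 1000-column row
INSERT INTO dbo.SessionAnswer (SessionId, QuestionId, Answer) VALUES (1, 1, N'Yes')
INSERT INTO dbo.SessionAnswer (SessionId, QuestionId, Answer) VALUES (1, 2, N'42')

Pivoting back to one row per session, when that is really needed, can then be done in a query or in the application.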
We have an existing SSRS server and have just created a new child domain. We'll be migrating users from the parent to the child, and want to give the users of that new domain access to SSRS. In the parent domain they are able to access it, but after migration, using the child-domain account, they cannot.
I have added the group CHILD\Domain Users with a System User role on SSRS, and PARENT\Domain Users was already there.
Is there any additional step I should/could take to get this active?
I have had this issue just pop up. I have local users who can connect fine, but my users who connect by VPN cannot. I get a 'server not available' or 'access denied' error. I did confirm that the VPN users are connected to the network correctly and can see that their shares and mappings are correct. Any ideas? Thanking you all in advance!
I am trying to revert to Windows 7 after upgrading to Windows 10; however, it will not let me, and the following message occurs: "Remove new accounts. Before you can go back to a previous version of Windows, you'll need to remove any user accounts you added after the most recent upgrade. The accounts need to be completely removed, including their profiles. You created one account (NT SERVICE\MSSQLSERVER). Go to Settings > Accounts > Other users to remove these accounts and then try again." However, I did not create any new users, and there are no other users listed in the Accounts section.
[crossposted] Hi, I wonder if anyone might lend me a brain. I have a stock database to build that covers over 1000 products, which might be said to fall into around 50 product families. Obviously, just to be awkward, all the types of stock have different attributes; so one product might be a tube with inside/outside diameter and length, and another a T-shaped cable joint.

All I can come up with is a separate table for each stock-type family, storing the table name and product code in the main stock table, so:

Tables: ProdA, ProdB, ProdC, Stock
Stock attributes: ProdId, ProdTable, Amount, Date, etc.
ProdA attributes: ProdId, AttributeX, AttributeY, AttributeZ, etc.

Then use code to parse the table name and product ID to select the correct query to get the product details. BUT this seems awfully inelegant and potentially wrong, so I'm loath to continue down this route. Can anyone tell me the "right" way to do this? I feel sure it must be a classic DB design exercise, but unfortunately one they didn't teach us at university (or maybe I was asleep)... Thanks!
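The textbook answer here is usually a supertype/subtype design rather than storing table names as data: one Product table for the shared columns, one child table per family for the family-specific attributes, both sharing the same key. A hedged sketch, with all names illustrative:

CREATE TABLE dbo.Product
(
    ProductId   int          NOT NULL PRIMARY KEY,
    ProductCode varchar(20)  NOT NULL,
    FamilyCode  varchar(10)  NOT NULL,     -- e.g. 'TUBE', 'JOINT'
    Description varchar(200) NULL
)

CREATE TABLE dbo.ProductTube
(
    ProductId       int          NOT NULL PRIMARY KEY REFERENCES dbo.Product (ProductId),
    InsideDiameter  decimal(9,2) NOT NULL,
    OutsideDiameter decimal(9,2) NOT NULL,
    TubeLength      decimal(9,2) NOT NULL
)

CREATE TABLE dbo.ProductCableJoint
(
    ProductId  int         NOT NULL PRIMARY KEY REFERENCES dbo.Product (ProductId),
    JointShape varchar(10) NOT NULL          -- e.g. 'T'
)

CREATE TABLE dbo.Stock
(
    StockId   int      NOT NULL IDENTITY PRIMARY KEY,
    ProductId int      NOT NULL REFERENCES dbo.Product (ProductId),
    Amount    int      NOT NULL,
    StockDate datetime NOT NULL
)

Stock rows then reference ProductId only; joining to the right family table is driven by FamilyCode rather than by a table name stored in the data. (The other classic alternative is a generic attribute/value table, which trades schema changes for messier queries.)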
Ok, so I have a primary table that contains the count of a linked table. Since I can let the identity update itself, I should be able to insert the same values into the linked child table.
I need to put the same record into the child table 1000 times (that's an arbitrary number; the actual count will be determined programmatically by the user).
I understand that I can do this:
INSERT INTO SomeTable (Cols) VALUES (vals) GO 1000
However, I can't make this work when executing the command from ADO.NET. It doesn't like the GO 1000 part of it.
Does anyone have an idea how I can do this VERY QUICKLY without having to execute a separate insert for every item (or batch them)?
The number could be as high as 250,000 in the child table.
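GO is a batch separator understood by SSMS/sqlcmd, not T-SQL, which is why ADO.NET rejects GO 1000. A set-based alternative is to generate the N rows in a single INSERT from a cheap row source. A hedged sketch, assuming SQL Server 2005 or later; the table, columns and values are placeholders, and @n would be a parameter supplied from ADO.NET:

DECLARE @n int, @parentId int, @someValue varchar(50)
SET @n = 250000            -- determined programmatically by the user
SET @parentId = 42         -- placeholder
SET @someValue = 'x'       -- placeholder

INSERT INTO dbo.ChildTable (ParentId, SomeCol)
SELECT TOP (@n) @parentId, @someValue
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b    -- cross join only to produce enough rows

This runs as one command and one round trip, so even 250,000 rows are a single statement rather than 250,000 separate inserts.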
Hi all,

I've got two tables called "webusers" (id, name, fk_country) and "countries" (id, name). I also have a search page where I can fill in a form to search users. In the dropdown for selecting the country I included an option called "all countries". Now the problem is: how can I write a stored procedure that restricts on fk_country depending on the submitted @fk_country parameter? It should be something like:

SELECT * FROM webusers
(if @fk_country > 0, i.e. a specific country rather than "all countries")
{ WHERE fk_country = @fk_country }

Who has an idea how to solve this problem?
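One common shape for this (a hedged sketch, assuming 0 is the value submitted for "all countries"; the table and column names are taken from the post):

CREATE PROCEDURE dbo.SearchWebUsers
    @fk_country int
AS
BEGIN
    SET NOCOUNT ON

    SELECT u.id, u.name, c.name AS country
    FROM webusers AS u
    LEFT JOIN countries AS c ON c.id = u.fk_country
    WHERE (@fk_country = 0 OR u.fk_country = @fk_country)
END

If the plan chosen for the "all countries" case hurts the filtered case (or vice versa), adding OPTION (RECOMPILE) to the SELECT, or building the WHERE clause dynamically, are the usual refinements.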
Enterprise SQL Projects
---------------------------
When Design is Replaced with an Architectural Plan
The following post is intended as a starting point on some main concepts to consider when dealing with enterprise SQL projects. While it is not a direct question of any kind, it may interest people who are, or have been, involved in enterprise projects and have therefore been troubled by similar problems.
Here is a quick overview of a couple of main concepts to consider when you have to deal with an enterprise project containing 1000+ stored procedures.
DOCUMENTATION: It is an absolute must to include a fully explanatory comment block at the top of every stored procedure.
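For example, a header block along these lines (purely illustrative, reusing one of the procedure names from the naming example further down):

/*==============================================================
  Procedure : spOrders_putOrderDetail
  Module    : Orders
  Purpose   : Inserts one order detail row for an existing order.
  Inputs    : @OrderId int, @ProductId int, @Quantity int
  Returns   : 0 on success, an error code otherwise
  Author    : <name>        Date    : <date>
  History   : <date> <name> <change>
==============================================================*/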
FUNCTIONS: Use functions to the maximum extent to reduce overall stored procedure complexity; a rule of thumb is a 1:10 ratio of functions to stored procedures, or similar.
TRIGGERS: There is a lot to say about them that cannot be covered in this context.
NAMING CONVENTION: Your naming convention should be 100% thought out and designed up front; no mistakes are allowed here, as they will make the stored procedures extremely difficult or impossible to browse.
A quick template could look like this:
sp + module name + underscore (_) + action (lower case) + noun (proper case)
For example: spOrders_putOrderDetail, spMaintainUsers_deactivateUser, spReports_getZeroInventory
(quoted by: tmorton)
My addition to this would be: the sp prefix is certainly overkill when dealing with 1000+ stored procedures and is not needed.
However, considerably more elaborate naming strategies can be used that will make the project much easier to maintain.
I am trying to transfer 90 million records (250-byte row length) from Oracle 8i to SQL Server 2000 using DTS, and it is taking 2 seconds to transfer 1000 records. Is there any way I can transfer the 90 million records quickly at all? At this rate it will take more than 10 hours.
I have two queries that generate two different datasets: one is a count of members, and the other is a count of admits. I need to generate a calculated field from the two datasets called admits per 1000, which is essentially (count of admits / count of members) * 12000. I was able to calculate admits per 1000 easily in Excel; however, I need some insight on how to do it in SQL.
Below are my queries from the two datasets.
MemberMonths dataset: Select factMembership.BusinessUnitCode, EffectiveCCYYMM, ISNULL(count(Distinct MemberId),0) As MemberCount From factMembership
[Code] ....
Admits dataset:
SELECT Factadmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM, ISNULL(Count(AdmitNum),0)As [Count of Admits] FROM factAdmissions
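One way to put the two together in SQL (a hedged sketch: the column names follow the two queries above, the CCYYMM columns are assumed to be directly comparable, and the factor 12000 is the one described in the question):

WITH Members AS
(
    SELECT BusinessUnitCode, EffectiveCCYYMM,
           COUNT(DISTINCT MemberId) AS MemberCount
    FROM factMembership
    GROUP BY BusinessUnitCode, EffectiveCCYYMM
),
Admits AS
(
    SELECT BusinessUnitCode, AdmitCCYYMM,
           COUNT(AdmitNum) AS AdmitCount
    FROM factAdmissions
    GROUP BY BusinessUnitCode, AdmitCCYYMM
)
SELECT m.BusinessUnitCode,
       m.EffectiveCCYYMM,
       m.MemberCount,
       ISNULL(a.AdmitCount, 0) AS [Count of Admits],
       CASE WHEN m.MemberCount = 0 THEN 0
            ELSE ISNULL(a.AdmitCount, 0) * 12000.0 / m.MemberCount
       END AS AdmitsPer1000
FROM Members AS m
LEFT JOIN Admits AS a
  ON a.BusinessUnitCode = m.BusinessUnitCode
 AND a.AdmitCCYYMM      = m.EffectiveCCYYMM;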
It seems that there is some kind of limitation on the SQL Server Agent job history log: no job keeps more than 1000 rows of history. This can be seen with the following query:
SELECT j.name, count(*)
FROM sysjobhistory jh,
sysjobs j
WHERE jh.job_id = j.job_id
GROUP BY j.name
HAVING count(*) >= 1000;
How can this limit be changed? We really need more than 1000 rows.
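The 1000-row cap corresponds to SQL Server Agent's history settings ("Maximum job history log size" and "Maximum job history rows per job"; SSMS: right-click SQL Server Agent > Properties > History). It can be raised there, or with the undocumented msdb procedure below; the numbers are only examples:

EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows         = 100000,   -- total rows kept in sysjobhistory
     @jobhistory_max_rows_per_job = 5000      -- rows kept per individual job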
We upgraded our SQL 2008 server to 2012 last month. I've noticed since then that in many tables with an auto-increment bigint primary key, the value increases by 1 like it should, then for some reason jumps up by 1000 or even 10000. I have the seed and increment set to 1, but that doesn't affect it. Is there anything that could be causing the next value to jump that much?
I have a PL/SQL procedure in an Oracle database which extracts 10,000 rows from a table, and now I have to load all 10,000 rows into SQL 2005 tables.
I have extracted the data from Oracle into a DataAdapter/DataSet; now I want to load all the rows into the SQL 2005 tables. Please help with how I can load them.
If I use an INSERT statement each time, it keeps the server busy and takes a long time for the 10,000 inserts to complete (even using a stored procedure is heavy, since it has to be called for every insert).
Is there any possibility that I can pass the REF CURSOR / DataSet / DataAdapter into a SQL stored procedure so that the inserts all happen together?
Hi,

Currently we're building a metadata-driven data warehouse in SQL Server 2000. We're investigating the possibility of updating tables with an enormous number of updates and inserts, and the use of checkpoints (for simple recovery, and BACKUP LOG for full recovery). On several websites people speak about the transaction log filling up, with the pace of growth unable to keep up with the updates. Therefore we want to create a script which flushes the dirty pages to disk. It's not quite clear to me how this works. The questions we have are:

* How does the process of updating, inserting and deleting work in SQL Server 2000 with respect to the log cache, log file, buffer cache, commit, checkpoint, etc.? What happens when?
* As far as I can see now, I'm thinking of processing chunks of 1000 records with a CHECKPOINT after each query. SQL Server autocommits each statement by default, so it will not need an explicit COMMIT. Something like this?
* How do I create chunks of 1000 records automatically without creating an identity field or something similar? Is there something like SELECT NEXT 1000?

Greetz, Hennie
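A minimal chunking sketch for the last question above, SQL 2000 style: SET ROWCOUNT caps each statement at 1000 rows and the loop runs until nothing is left, with a CHECKPOINT after each chunk as planned. It assumes there is some predicate (here a placeholder Processed flag) that identifies rows still to be handled; all names are illustrative.

DECLARE @rows int
SET @rows = 1
SET ROWCOUNT 1000

WHILE @rows > 0
BEGIN
    UPDATE dbo.TargetTable
    SET    SomeColumn = 'new value',
           Processed  = 1
    WHERE  Processed  = 0

    SET @rows = @@ROWCOUNT

    CHECKPOINT      -- flush dirty pages after each 1000-row chunk
END

SET ROWCOUNT 0      -- always reset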
"Database XYZ has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files."
I am attempting to generate files containing more than 1000 characters per line by outputting the results of a stored procedure via osql to a flat file. osql (and isql) appear to force a newline after 1000 characters, even when specifying a -w2000 parameter.
I have also tried to output the results of the stored procedure via DTS and this appears to do the same thing!
Does anybody know how to prevent osql (or isql) from forcing the newline?
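Not an osql switch, but one workaround that avoids the wrapping entirely is to export with bcp in character mode instead; it writes each result row as a single line regardless of length. A hedged one-liner, with the server, database and procedure names as placeholders:

bcp "EXEC MyDb.dbo.MyLongRowProc" queryout "C:\out\result.txt" -c -S MyServer -T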
I have a table of records containing CustomerID, SportsGoods and Price, at different datetimes. I want to add up what each customer has spent and, when it crosses 1000, show the purchase time at which it crossed 1000. I need a query without WHILE loops.
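A hedged sketch, assuming SQL Server 2012 or later (for the windowed running total), non-negative prices, and illustrative table/column names. It returns, per customer, the first purchase at which cumulative spend reaches 1000:

WITH Running AS
(
    SELECT CustomerID,
           SportsGoods,
           Price,
           PurchaseDate,
           SUM(Price) OVER (PARTITION BY CustomerID
                            ORDER BY PurchaseDate
                            ROWS UNBOUNDED PRECEDING) AS RunningSpend
    FROM dbo.CustomerPurchase
)
SELECT CustomerID,
       MIN(PurchaseDate) AS CrossedAt,        -- purchase at which the total first reaches 1000
       MIN(RunningSpend) AS SpendAtThatPoint
FROM Running
WHERE RunningSpend >= 1000
GROUP BY CustomerID;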
Hi. We need to create a view of our Active Directory users (we have 2,500). I found out that there is a max page size of 1000, so we cannot get more data. Has anyone found a solution to that problem? Thanks
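If the view is built over an ADSI linked server, the 1000-row cap is Active Directory's server-side page size, and OPENQUERY will not page past it. The usual workaround is to split the query into ranges and UNION the pieces. A hedged sketch: "ADSI" is an assumed linked-server name, the LDAP path is a placeholder, and the split point is arbitrary:

SELECT * FROM OPENQUERY(ADSI,
    'SELECT sAMAccountName, displayName, mail
     FROM ''LDAP://DC=yourdomain,DC=local''
     WHERE objectCategory = ''person'' AND objectClass = ''user''
       AND sAMAccountName < ''m''')
UNION ALL
SELECT * FROM OPENQUERY(ADSI,
    'SELECT sAMAccountName, displayName, mail
     FROM ''LDAP://DC=yourdomain,DC=local''
     WHERE objectCategory = ''person'' AND objectClass = ''user''
       AND sAMAccountName >= ''m''');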
It seems that when I run the query with SET STATISTICS IO ON, the statistics report a 'worktable', and the query takes 30+ seconds. If the worktable is omitted (whatever the reason), the query takes less than 1 second.
Here is my take: I believe the worktable is created in tempdb, and if not, then the whole query is using cached pages. Am I right?
If I am right, then the theory is that if I increase (via sp_configure) the server's min memory setting and min query memory, the query ought to use the cached pages and return in less than 1 second (especially as there is absolutely no one but me on the server). So far I can't make it go faster. What setting am I missing to make it run faster?
Another question: if the query cannot avoid using tempdb, is it always going to take 30+ seconds? Why does tempdb involvement make it so much slower?
Is it safe to back up while the database is running 1000 transactions/sec? If yes, why should I buy Veritas Backup Exec Server Edition and Veritas Backup Exec Online Backup Pack?
We just switched from SQL Server 2008 R2 to SQL Server 2012. I am facing one problem with identity columns: whenever I restart SQL Server, the seed value for each identity column is increased by 1000 (for an int identity column it is 1000, and for bigint it is 10000).
For example, if the seed value of a table was 3, then after restarting SQL Server it will be 1003; if I restart SQL Server again it will be 2003, and so on.
After searching on Google I found that this is a new feature in SQL Server 2012 (I don't know what the use of it is), and that there are only two options if you want the old identity behaviour:
1. Use a sequence object:
a) I am using the same database on both SQL Server 2008 and 2012, so I can't use a sequence in 2008.
b) If I went with sequences, I would need to change the save procedure for each table, which is a bulky task for us.
2. Use Trace Flag 272 (-T272)
I can go with this solution because it requires no changes to my application. Someone suggested adding -T272 as a startup parameter; after that, SQL Server identity columns should behave as in the previous version. I did exactly that, but it is not working.
I don't want to make any changes to my database structure.
How do I use -T272 correctly, and why is it not working? I don't want this new identity feature; how can I suppress it?
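One quick sanity check (assuming the flag has been added as its own -T272 entry under Startup Parameters in SQL Server Configuration Manager and the service restarted afterwards) is to ask the running instance whether the flag is actually active:

DBCC TRACESTATUS (272, -1);    -- Status = 1 and Global = 1 means the flag is in effect

If this shows the flag as not enabled, the startup parameter was not applied; common causes are typing it into the wrong box, combining it with another parameter in the same entry, or not restarting the service.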
I'm trying to upload 1000 .txt files into one table in SQL. I'm using the following query to upload one .txt file at a time:
bulk insert [dbo].AAA_2013_2015 from 'dataserverSQL Data FilesSQL_EMELIZFC x Bloque Detallada201308 Detalle FacturasFACT_BLOQ_AGO13 (4).txt' with (firstrow = 2, lastrow = ???, fieldterminator = ';', rowterminator = '0x0A')
I'm trying to make the query skip the last row, because it gives me the following error:
Msg 4866, Level 16, State 1, Line 1 The bulk load failed. The column is too long in the data file for row 1, column 17. Verify that the field terminator and row terminator are specified correctly. Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Is there a command to skip the last row, something like lastrow = all-1, or something like that?
I also tried executing it with the MAXERRORS option, like this:
bulk insert [dbo].AAA_2013_2015 from 'dataserverSQL Data FilesSQL_EMELIZFC x Bloque Detallada201308 Detalle FacturasFACT_BLOQ_AGO13 (15).txt' with (firstrow = 2, fieldterminator = ';', MAXERRORS = max_errors, rowterminator = '0x0A')
But it does not recognize the MAXERRORS option; I also tried putting an actual number of errors instead of max_errors.
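For reference, MAXERRORS takes a numeric literal, and pairing it with ERRORFILE shows which rows were rejected. A hedged sketch (the UNC path and error-file location are placeholders; whether a malformed final line can be skipped this way depends on the exact error, and the alternative is to count the file's lines beforehand and pass a real number to LASTROW):

BULK INSERT [dbo].AAA_2013_2015
FROM '\\server\share\FACT_BLOQ_AGO13 (15).txt'
WITH (FIRSTROW = 2,
      FIELDTERMINATOR = ';',
      ROWTERMINATOR = '0x0A',
      MAXERRORS = 10,                                    -- a number, not the word max_errors
      ERRORFILE = 'C:\temp\FACT_BLOQ_AGO13_errors.txt')  -- rejected rows are written here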