In my original database I have a column for "path". The records in this column look like 【mms://192.12.34.56/2/1/kbe-1a1.wmv】, and there are about 1,202,045 of them, so I don't think updating them one at a time is an easy job. It might work, but I'd have to do the same thing 1,202,045 times.
I have to change 【mms://192.12.34.56/2/1/kbe-1a1.wav】 to 【mms://202.11.34.56/2/1/kbe-1a1.wav】. I've looked through reference books and the internet and can't find the answer to this problem. Can you help? Or is it an impossible job? Thanks.
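If the table and column names were, say, Media and Path (hypothetical, since the post doesn't give them), a single set-based UPDATE with REPLACE can rewrite every matching row at once instead of 1,202,045 manual edits; this is only a sketch against an assumed schema:

-- Hypothetical table/column names; adjust to your schema.
-- Rewrites the old IP prefix to the new one in every row that contains it.
UPDATE Media
SET    Path = REPLACE(Path, 'mms://192.12.34.56/', 'mms://202.11.34.56/')
WHERE  Path LIKE 'mms://192.12.34.56/%';

Running a SELECT with the same WHERE clause first, to count the rows it would touch, is a cheap sanity check before the update.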
I want to be able to return the rows from a table that have been updated since a specific time. My query returns results in less than 1 minute if I hard-code the reference timestamp, but it keeps spinning if I read the reference timestamp from a table. See the examples below (the "Reference" table has only one row, with the value 2014-09-30 00:00:00.000).
select * from A where ReceiptTS > '2014-09-30 00:00:00.000'
select * from A where ReceiptTS > (select ReferenceTS from Reference)
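For what it's worth, one workaround that is sometimes tried is reading the reference value into a local variable first, so the outer query compares against a single known value rather than a subquery; a sketch, assuming the columns are DATETIME:

DECLARE @ref DATETIME;
SELECT @ref = ReferenceTS FROM Reference;   -- the table has only one row

SELECT * FROM A WHERE ReceiptTS > @ref;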
Hi, I need to use TOP and a join in the same SQL statement, to get the top 10 refnr values from orders_refnr. That works fine when I use this: SQL = "SELECT TOP 10 refnr, antal = COUNT(refnr) FROM orders_refnr INNER JOIN produkter ON (orders_refnr.refnr = produkter.referensnummer) GROUP BY refnr ORDER BY antal DESC" But I need to be able to get information from more fields than just refnr. How can I specify more fields? I need to get other fields from produkter. Please help, I'm really stuck.
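One common pattern (assuming SQL Server) is to keep the TOP 10 grouping in a derived table and then join that back to produkter for the extra columns; p.namn below is a placeholder for whichever produkter fields you actually need:

SELECT t.refnr, t.antal, p.namn          -- p.namn is hypothetical; list the produkter columns you want
FROM (
        SELECT TOP 10 o.refnr, COUNT(o.refnr) AS antal
        FROM   orders_refnr o
        INNER JOIN produkter p ON o.refnr = p.referensnummer
        GROUP BY o.refnr
        ORDER BY antal DESC
     ) AS t
INNER JOIN produkter p ON t.refnr = p.referensnummer
ORDER BY t.antal DESC;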
I have gotten some criticism from coworkers regarding this test and just wanted to see what you guys think. I realize the wording could use improvement, and any criticism towards making it easier to understand is much appreciated. FWIW, I had to solve this problem on the job, so I feel it is a real-world test that helps me understand how people think and whether they try to find alternate solutions. Thanks!
~~~~~~~~~~~~~~~~~~~~
Given a table that has over 100,000 records...
SUBSIDIARY
==========
PARENT_ID           INT
CHILD_ID            INT
ULTIMATE_PARENT_ID  INT
CLEANUP_IND         BIT
...where each PARENT_ID can have multiple CHILD_ID values, but the PARENT_ID should not equal the CHILD_ID. After an initial data load, the ULTIMATE_PARENT_ID and CLEANUP_IND columns contain NULL values (see page 2 for sample data). ULTIMATE_PARENT_ID is defined as the topmost parent in the chain for the particular CHILD_ID record, so if the chain were only 2 levels deep the ULTIMATE_PARENT_ID would be the CHILD_ID's PARENT_ID's PARENT_ID.
Please write an answer for all three questions below:
A) Which of the following queries should you run first?
B) Write an optimized query to identify the ULTIMATE_PARENT_ID for each CHILD_ID and set its value into the ULTIMATE_PARENT_ID column.
C) Write a query to identify ALL of the circular references and mark each record that is a circular reference by updating the CLEANUP_IND column to 1.
~~~~~~~~~ Page 2 ~~~~~~~~~
Sample data; remember, though, that this table has over 100,000 records and the parent-child chain can go n levels deep, where n is not known.
PARENT_ID  CHILD_ID  ULTIMATE_PARENT_ID  CLEANUP_IND
1024       512       NULL                NULL
36         2300      NULL                NULL
887        541       NULL                NULL
1022       1024      NULL                NULL
546        887       NULL                NULL
512        2305      NULL                NULL
112        967       NULL                NULL
697        123       NULL                NULL
901        452       NULL                NULL
2300       666       NULL                NULL
334        445       NULL                NULL
512        903       NULL                NULL
884        554       NULL                NULL
313        313       NULL                NULL
554        884       NULL                NULL
112        119       NULL                NULL
967        555       NULL                NULL
2305       333       NULL                NULL
333        36        NULL                NULL
541        546       NULL                NULL
1030       1020      NULL                NULL
112        999       NULL                NULL
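For discussion, here is one way part B is often approached on SQL Server, assuming the circular references from part C have already been flagged in CLEANUP_IND (otherwise the loop below would never terminate): seed ULTIMATE_PARENT_ID with the immediate parent, then promote it one level per pass until nothing changes. This is a sketch, not the "official" answer to the test:

DECLARE @rows INT;

-- Level 1: start every child at its immediate parent.
UPDATE SUBSIDIARY SET ULTIMATE_PARENT_ID = PARENT_ID;
SET @rows = @@ROWCOUNT;

-- Climb one level per iteration until no ultimate parent is itself a child.
WHILE @rows > 0
BEGIN
    UPDATE s
    SET    s.ULTIMATE_PARENT_ID = p.PARENT_ID
    FROM   SUBSIDIARY s
    INNER JOIN SUBSIDIARY p ON s.ULTIMATE_PARENT_ID = p.CHILD_ID
    WHERE  ISNULL(s.CLEANUP_IND, 0) = 0     -- skip rows flagged as circular
       AND ISNULL(p.CLEANUP_IND, 0) = 0     -- and don't walk through them
       AND p.PARENT_ID <> p.CHILD_ID;       -- ignore self-references
    SET @rows = @@ROWCOUNT;
END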
Hi, I have an NT server whose C: drive is 500 MB and whose D: drive is 44 GB.
I know that the person who set up this server did not give enough space to the C drive; here is the problem. I am running SQL Server 7.0, which has 30 GB of data on the D drive. I need to reconfigure the NT hard drive so I can allocate 2 GB to the C drive and 42 GB to the D drive.
What is the best, safe method to accomplish this task?
After experiencing a hard drive failure I have reinstalled MSSQL 7 on one drive, and I have a database which I need to recover on a separate physical drive. How can I go about doing this?
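If the .mdf and .ldf files on the surviving drive are intact, one route on SQL Server 7.0 is to attach them to the fresh installation with sp_attach_db; a sketch with hypothetical names and paths (point them at the real files):

EXEC sp_attach_db
     @dbname    = N'MyDatabase',
     @filename1 = N'E:\MSSQL7\Data\MyDatabase.mdf',
     @filename2 = N'E:\MSSQL7\Data\MyDatabase_log.ldf';

If instead you only have backup files, RESTORE DATABASE ... WITH MOVE can place the data and log files on whichever drive you choose.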
Hi, 1. I ran xp_fixeddrives and got this result:
drive  MB free
-----  -----------
C      1708
D      16311
2. I ran the Backup Wizard in EM and was able to see only the drives above.
3. But if I run a backup in EM I can see more than 10 drives (C, D, H, I, J, M, N, etc.). Why do I see this difference? How do I find out exactly how many drives there are on this server without going directly to the server? I'd appreciate your valuable answer. Thanks, Ravi
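For what it's worth, xp_fixeddrives lists only local fixed disks, so mapped or network drives never show up in it. One way to peek at what the server itself can see, without logging on to the box (it does require xp_cmdshell rights), is a sketch like:

-- Shows network drive mappings in the security context running the command
EXEC master.dbo.xp_cmdshell 'net use';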
Hi, I'm looking for a way to check the free space left on the hard drives and then, if needed, send an alert to notify us when we need to free up some space. I played around with Performance Monitor and realized I could do it that way, but I think you would have to leave Performance Monitor running all the time, and I'm not sure I want to do that. I also read about the xp_fixeddrives proc that displays how much free space is available, but I don't know where to go from there. Does anyone have any recommendations for the best way to do this?
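One approach that avoids leaving Performance Monitor running is to schedule a small SQL Agent job that captures xp_fixeddrives into a table and raises an alert when a drive drops below a threshold; a sketch, assuming SQL Mail is configured for xp_sendmail and using a hypothetical recipient address:

CREATE TABLE #drives (drive CHAR(1), MBfree INT);

INSERT INTO #drives EXEC master.dbo.xp_fixeddrives;

IF EXISTS (SELECT 1 FROM #drives WHERE MBfree < 500)     -- 500 MB threshold; adjust to taste
    EXEC master.dbo.xp_sendmail
         @recipients = 'dba@yourcompany.com',            -- hypothetical address
         @subject    = 'Low disk space warning',
         @message    = 'One or more drives are below the free-space threshold.';

DROP TABLE #drives;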
Right, I have this database that I need to sort, I'll give you an example:
ID    Name      Value
2312  Sega      200
5678  Blizzard  215
3412  Bullfrog  210
6798  Nintendo  195
Now, what I need to do is sort it and perform calculations on it, and I need the list to be sorted with a predefined row as the top result, say like this one time:
ID    Name      Value
3412  Bullfrog  210
2312  Sega      200
5678  Blizzard  215
6798  Nintendo  195
as you can see sorting it alphabetically would lead to
5678  Blizzard  215
3412  Bullfrog  210
6798  Nintendo  195
2312  Sega      200
(or the other way around if you play with asc/desc) by id would be
2312  Sega      200
3412  Bullfrog  210
5678  Blizzard  215
6798  Nintendo  195
There aren't any top or bottom values, so to speak, for the rows I want on top, so... how do I sort it like this?
3412  Bullfrog  210
2312  Sega      200
5678  Blizzard  215
6798  Nintendo  195
the order after the top one is irrelevant.
Now...I know I could sort this by doing something like
"Select * FROM blablabla WHERE Name = 'Bullfrog'"
and then doing "Select * FROM blablabla" and then just bypassing that post in asp/php code or whatever, but that would be a pain for me to do as I have to perform some massive calculations and the code would be alot larger then needed be
Its a brain teaser allright...can you help me out?
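One way to get this in a single query, so the extra round trip and the skip-this-row logic in ASP/PHP go away, is to sort on a CASE expression; a sketch using the placeholder table name from your own query:

SELECT *
FROM   blablabla
ORDER BY CASE WHEN Name = 'Bullfrog' THEN 0 ELSE 1 END,
         Name;   -- the secondary sort is optional, since the order after the top row doesn't matter

Swapping 'Bullfrog' for a parameter lets the same query promote whichever row should be on top that time.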
I was wondering if anyone played around with changing the allocation unit size when formatting the hard drive the SQL server is running on. I would think that setting it higher to account for the larger size of the database files would help, but I'm not sure.
I have 2 hard disks in my computer and I have SQL 2005 Express on one of them (let's say C:); however, my C: is going to be full soon! Once it is full, is it possible to create a table on my other hard disk that the server can recognise?
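Yes, the usual way (assuming the database is called MyDb and the second disk is D:, both hypothetical here) is to add a filegroup with a file on the other disk and create new tables on that filegroup; a sketch:

ALTER DATABASE MyDb ADD FILEGROUP SecondDisk;

ALTER DATABASE MyDb
ADD FILE (NAME = 'MyDb_data2', FILENAME = 'D:\SQLData\MyDb_data2.ndf', SIZE = 100MB)
TO FILEGROUP SecondDisk;

CREATE TABLE dbo.BigTable (Id INT PRIMARY KEY, Payload VARCHAR(255))
ON SecondDisk;

Bear in mind that SQL Server 2005 Express caps each database at 4 GB regardless of how many files or disks it spans.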
I am using this stored procedure in SQL. I have 6 tables. One is called employees. This is what I need to be able to do: a user enters a new employee into a winform, picks a role, division, manager, technical skill set and application from the drop-down lists and hits save. The employee table should be the only one updated, and it has these columns only (firstname, lastname, divisionid, managerid, roleid, techskillsid, and appID). At the moment it is saving the firstname and lastname correctly, but the rest of the ID columns are null. It is updating the other tables with the strings entered, but what I need is for the employee table to be updated with the corresponding IDs. Is this a lot more complicated than I thought? If I try to replace the role with roleid etc., it just tells me it can't convert string to int, which is understandable. How do I do this?
CREATE PROCEDURE sp_InsertEmployee
    @Firstname nvarchar(50),
    @Lastname nvarchar(50),
    @Role nvarchar(50),
    @Manager nvarchar(50),
    @Division nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO EMPLOYEES (FIRSTNAME, LASTNAME) VALUES (@FIRSTNAME, @LASTNAME)
    INSERT INTO [ROLE] ([ROLE]) VALUES (@ROLE)
    INSERT INTO MANAGER (MANAGER) VALUES (@MANAGER)
    INSERT INTO DIVISION (DIVISION) VALUES (@DIVISION)
END
GO
My C# code is like this:
SqlCommand sqlC = new SqlCommand("sp_InsertEmployee", myConnection);
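If the roles, managers and divisions already exist in their own tables and the drop-downs just pick one, then the proc needs to look up each ID rather than insert the text again; here is a sketch, assuming the lookup tables have ID columns named ROLEID, MANAGERID and DIVISIONID (adjust to the real schema; passing the IDs from the winform's SelectedValue instead of strings would be simpler still). Also make sure the SqlCommand has CommandType set to StoredProcedure before adding the parameters.

CREATE PROCEDURE sp_InsertEmployee
    @Firstname NVARCHAR(50),
    @Lastname  NVARCHAR(50),
    @Role      NVARCHAR(50),
    @Manager   NVARCHAR(50),
    @Division  NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @RoleID INT, @ManagerID INT, @DivisionID INT;

    -- Look up the IDs matching the text picked in the drop-downs
    -- (the ID column names here are assumptions about your lookup tables)
    SELECT @RoleID     = ROLEID     FROM [ROLE]   WHERE [ROLE]   = @Role;
    SELECT @ManagerID  = MANAGERID  FROM MANAGER  WHERE MANAGER  = @Manager;
    SELECT @DivisionID = DIVISIONID FROM DIVISION WHERE DIVISION = @Division;

    INSERT INTO EMPLOYEES (FIRSTNAME, LASTNAME, ROLEID, MANAGERID, DIVISIONID)
    VALUES (@Firstname, @Lastname, @RoleID, @ManagerID, @DivisionID);
END
GO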
Hello, I am experimenting with indexes and hope people can shed light on some of my problems. I am using SQL 2000 on Win 2000 Server. Using the following query for discussion:

SELECT TOP 1000000
    E.EUN_Numeric,   -- Primary key
    E.EUN_CODE,      -- VarChar
    E.[timestamp]    --,
    --E.Model        -- Computed column (substring of EUN_CODE)
FROM dbo.Z1_EUNCHK E
--WHERE E.[timestamp] > DATEADD(wk, -48, getdate()) AND
--      E.[timestamp] < DATEADD(wk, -4, getdate())
ORDER BY E.[timestamp] DESC

Problem 1) If I set up a single index on the timestamp (plus the PK on EUN_Numeric), there is no improvement in performance. It is only when I set up an index on the timestamp, EUN_Numeric and EUN_Code that I get a good improvement. This is also the case with the WHERE clause added. I am using Query Analyzer. The improvement is 14 secs down to 3 secs (mainly from the removal of the sort step). Why? My expectation is that if my query uses the [timestamp] column, then surely an index on this column alone is adequate.
Problem 2) Introducing the simple computed column into the query takes the time to 15 secs (with sort steps involved). Why does it revert to sorting when previously the index was used?
Regards, JC......
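For what it's worth, the behaviour described is consistent with a covering index: when the index carries every column the query touches, the engine can answer from the index alone and return rows already in index order, so the sort step disappears; pull in a column that is not in the index (the computed Model column) and it has to go back to the data pages and sort again. A sketch of the kind of composite index that appears to be helping, using the names from the post:

-- Covers [timestamp], EUN_Numeric and EUN_CODE for this query
CREATE INDEX IX_Z1_EUNCHK_ts_cover
ON dbo.Z1_EUNCHK ([timestamp] DESC, EUN_Numeric, EUN_CODE);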
I have a 75 GB hard drive and a 300 GB one. I want to mirror the 75 to the 300 and use the extra space as data storage. Is this possible if I partition the 300 and then mirror the hard drives?
How can I do this with VBScript or C#?
- Copy backup files down from a network share into the data directory of my local SQL 2005 instance
- Perform a restore using the files copied above
- Execute a DTS package
More info: our databases are scripted and exist in the typical development and testing environments. So as I get ready to start a new application, I want my local SQL instance to be updated with the structure changes as well as the data. So I have to apply the changes from the scripted sources and pull over the data, and I would naturally like to automate this.
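The file copy and the DTS execution belong in the VBScript/C# wrapper (e.g. File.Copy plus shelling out to dtsrun.exe or dtexec.exe), but the restore step itself is plain T-SQL that the script can send to the local instance; a sketch with hypothetical database, logical file and path names:

-- Hypothetical names/paths; WITH MOVE relocates the files into the local data
-- directory, REPLACE overwrites the existing local copy.
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\SQLData\MyAppDb.bak'
WITH MOVE 'MyAppDb_Data' TO N'C:\SQLData\MyAppDb.mdf',
     MOVE 'MyAppDb_Log'  TO N'C:\SQLData\MyAppDb_log.ldf',
     REPLACE;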
It was shipped with a 76 gig drive setup in RAID 1 (2 disk) and a 400 gig drive setup in RAID 5 (4 disk).
I would like to determine the best way to set up the partitions: what size, and what should be placed on each.
Take the C: drive, for example. Should I just put Windows on there and nothing else? Do I stand to gain something by not using part of that 76 gigs as a D: drive for my apps?
Is it possible to force parameters into the reports, enabling me to push a user ID value into every report that is picked up from the list? The user ID is a system value and I don't want end users having any knowledge of it.
Hi! I'm getting desperate here!!! Two questions:
1) Is line 13 really selecting all the records with the username samurai (in this case)?
2) How do I fill the Boolean SeExiste variable with a value from the record?
1  Dim UserIDParameters As New Parameter
2
3  UserIDParameters.Name = "ProdUserID"
4
5  UserIDParameters.DefaultValue = "samurai"
6
7  Dim LoginSource As New SqlDataSource()
8
9  LoginSource.ConnectionString = ConfigurationManager.ConnectionStrings("ASPNETDBConnectionString1").ToString()
10
11 LoginSource.SelectCommandType = SqlDataSourceCommandType.Text
12
13 LoginSource.SelectCommand = "SELECT FROM aspnet_Users (FirstTime) VALUES (@UserIDParameters) "
14
15 Dim SeExiste As Boolean
16
17 SeExiste = LoginSource.SelectParameters("FirstTime").DefaultValue
I'm a newbie, and this is a simple thing that in classic ASP would be very easy to do!!! Please help me! Thanks in advance!
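For what it's worth, line 13 mixes SELECT and INSERT syntax, so it isn't selecting anything. A valid statement for "the FirstTime value for this user" would look more like the sketch below; FirstTime is taken from your code, and UserName is the column the standard aspnet_Users table uses (adjust if yours differs):

SELECT FirstTime
FROM   aspnet_Users
WHERE  UserName = @ProdUserID

To get the value into SeExiste you would then execute the data source's Select() method and read the column from the returned row, rather than reading a parameter's DefaultValue.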
The postage and packing scheme being used at the site I'm working on depends on the customer's location.
If they're in the UK they get one scheme and if they're in Ireland they get another. Furthermore, if they're anywhere else they get yet another scheme.
A customer's country is indicated by a 'countryID' stored in the main customer row in the database. (This ID references a country in the Countries table.)
Thus, I was wondering whether it is acceptable to hard-code the country PKs of the UK and Ireland into the formula which works out the postage and packing.
At present, for a similar issue, I've even hard-coded the PKs of the UK and Ireland into some JavaScript running on the client.
Is it fair design to work with hard-coded PKs like this?
Hi, here is the issue. On the site there is a multiline text box. People type in multiple lines, including soft line breaks (soft return, Shift+Enter) and hard line breaks (hard return, Enter). The data gets stored in the SQL Server 2000 database. In Enterprise Manager I can see the returns (the line breaks), but when the same data gets posted back to the page it loses those soft and hard returns. What's going on? Help!
I'm on day 3 of troubleshooting... still no luck! Very simple application: extract data from the database and populate the GridView. I get the following error message:
Server Error in '/WebSite3' Application.
An attempt to attach an auto-named database for file C:Webspaceigiriabcabc.comwwwWebSite3App_DataDatabase.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
-------------------------------------------------
Look familiar? So I Googled it and tried everything, from removing User Instance=true in the web.config to adding Initial Catalog=Database, and it didn't work. Instead I was greeted with a newer error message:
Server Error in '/WebSite3' Application.
Could not open new database 'Database'. CREATE DATABASE is aborted. Could not attach file 'C:Webspaceigiriabcabc.comwwwWebSite3App_DataDatabase.mdf' as database 'Database'. File activation failure. The physical file name "D:-- Work Documents --WorkezabcWebSite3App_DataDatabase_log.LDF" may be incorrect. The log cannot be rebuilt when the primary file is read-only.
I have no idea why it's pointing back to my local directory; that could be the issue, but I'll leave it to you experts for your valuable input. It's uploaded onto a web server running ASP.NET 2.0 and, I think, MS SQL Server 2005. Feel free to ask if you need additional information. Thanks for reading!
I thought a stored procedure was taking much too long to complete. So, I moved the main query to Query Analyzer and found that when I substituted actual values for variables that my SELECT statement ran in seconds. Just to test, I created DECLARE statements, set the variables equal to the same values, re-ran the SELECT statement and it took over a minute. Even the Execution Plan was much different. Any suggestions?
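For what it's worth, this pattern usually comes from local variables: the optimizer cannot see their values at compile time, so it builds a generic plan, while the hard-coded literal gets a plan based on the actual statistics. One common workaround is to pass the value as a real parameter through sp_executesql (creating the procedure WITH RECOMPILE is another option); a sketch with hypothetical table and column names:

-- @From is a parameter the optimizer can sniff, not a variable it must guess about.
DECLARE @sql NVARCHAR(500);
SET @sql = N'SELECT * FROM dbo.Orders WHERE OrderDate >= @From';

EXEC sp_executesql @sql, N'@From DATETIME', @From = '20140101';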
Hello I was hoping somebody out there could help me …..
We have a hard-coded application which uses the sa account with no password. We want to add a password to sa, but when we do, we get users/DBAs calling us saying the application does not work.
How can we add a password to sa and still get the application to work? Unfortunately we do not have the scripts for the application, nor do we know the whereabouts of the developers.
Any suggestions/ideas will be greatly appreciated.
Could some of the technical gurus here please help me solve this problem!!!
We lost power, and when the generator came on our system crashed and the data and the hardware were gone. We initialized the disk, and we lost our backups. There is no tape backup in the company. Thanks for your help.
I have a general question. Would SQL server have slower performance if you placed the ldf or mdf files on a dynamic drive setup or should it always be basic? I noticed that a server had 2 dynamic drives and the log files and mdf files are located on these drives. Usually I see all the drives as basic not dynamic. Does this even matter?
I want to perform backups to a network drive. I need to know if I can access the backup drive via UNC. I have not been able to get it to work and, for now, I would just like to know if what I am trying to do SHOULD work.
For example I want to backup to device mdtnts_prod02LM2BackupNameBack.DAT.
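In principle it should work: BACKUP DATABASE accepts a UNC path directly, provided the account the SQL Server service runs under (not your own login) has write permission on the share, and bearing in mind that mapped drive letters are generally not visible to the service. A sketch with hypothetical database and share names:

-- The SQL Server service account needs write rights on the share and folder.
BACKUP DATABASE MyDb
TO DISK = N'\\mdtnts_prod02\SQLBackups\MyDb_full.bak'
WITH INIT;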
I monitor a few "perfmon" counters which includes under the "system" object, "bytes transmitted/sec" and "file read bytes/sec". Every once in awhile, these counters will skyrocket, which can also be verified by the hard drive lights flickering like mad.
The only software installed on the machine is SQL Server 2K.
I was wondering if anyone knew how I could monitor within SQL 2K which process or user is using all of these cycles. If anyone could shed some light on this it would be greatly appreciated. Specifically, I would like to find out which database/query is doing this so I can minimize it in the future, as this affects all of the other connections.
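On SQL Server 2000 there are no DMVs, but two things you can query from inside the server are sysprocesses (per-connection physical I/O) and ::fn_virtualfilestats (cumulative I/O per database file); a sketch:

-- Connections doing the most physical I/O, with their database and login.
-- Run DBCC INPUTBUFFER(spid) on an interesting spid to see its last statement.
SELECT TOP 10 spid, DB_NAME(dbid) AS dbname, loginame, physical_io, cpu, lastwaittype
FROM   master.dbo.sysprocesses
ORDER BY physical_io DESC;

-- Cumulative reads/writes per file for one database since the instance started.
SELECT FileId, NumberReads, NumberWrites, BytesRead, BytesWritten
FROM   ::fn_virtualfilestats(DB_ID('YourDatabase'), -1);   -- hypothetical database name

Sampling these periodically and comparing the deltas is usually enough to tie the spikes back to a particular database or connection.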